Created on 2022-12-11 19:26
Published on 2022-12-11 21:14
Today I was reading more about artificial intelligence (AI), specifically AI risk management processes and frameworks. For something different, I thought I would chat directly with an AI-based system to see what I could learn from it on the subject. I selected OpenAI's ChatGPT, which you can also use here: https://chat.openai.com. Below is the transcript, with some interesting things [IMO] underlined, and some final thoughts at the end.
Me: I would like to know more about artificial intelligence risk management. What can you tell me about that?
ChatGPT: Artificial intelligence (AI) risk management is the process of identifying, assessing, and mitigating risks associated with the use of AI technology. This can include risks related to the development and deployment of AI systems, as well as the potential impacts of AI on society, the economy, and individuals.
Me: What are some risks related to artificial intelligence?
ChatGPT: There are many potential risks associated with the use of artificial intelligence, some of which include:
Me: What are some threats related to artificial intelligence?
ChatGPT: The risks associated with artificial intelligence can manifest as threats to individuals, organizations, and society as a whole. Some examples of these threats include:
Me: What is the probability that, as artificial intelligence becomes more advanced, it will lead to a loss of control over actions and decisions?
ChatGPT: It is difficult to estimate the probability that advanced artificial intelligence will lead to a loss of control over actions and decisions, as the field of AI is rapidly evolving and there are many factors that could influence this outcome. Some experts believe that the development of AI could lead to a loss of control, while others argue that it is possible to design and deploy AI systems in a way that maintains human oversight and control. Ultimately, the likelihood of a loss of control will depend on a number of factors, including the advances in AI technology and the strategies and safeguards put in place to manage its development and deployment.
Me: What happens if artificially intelligent systems become sentient?
ChatGPT: The concept of artificial intelligence becoming sentient, or self-aware, is currently the subject of much debate and speculation. If AI systems were to become sentient, it is difficult to predict exactly what would happen, as this is a highly complex and uncertain scenario. Some potential outcomes of sentient AI could include:
Me: What are some controls that should be put into place to mitigate the risks related to artificial intelligence?
ChatGPT: There are several measures that can be taken to mitigate the risks associated with artificial intelligence (AI). Some of these include:
Overall, it is important to approach the development and use of AI with caution and care, and to put in place appropriate controls and safeguards to ensure that its potential risks are mitigated.
Me: What do AI governance and oversight processes entail?
ChatGPT: These processes can include a range of activities, such as:
Overall, the goal of AI governance and oversight processes is to ensure that AI is used in a responsible and ethical manner, and to mitigate the potential risks associated with its development and deployment.
Me: {I asked the same question again}
ChatGPT: AI governance and oversight processes refer to the frameworks, policies, and procedures that are put in place to ensure that AI systems are developed, deployed, and used in an ethical and responsible manner. This can include a wide range of activities, such as setting standards for the development and use of AI, establishing guidelines for ethical AI practices, conducting risk assessments, and monitoring the performance and impact of AI systems. In general, the goal of AI governance and oversight processes is to ensure that AI is used in a way that is beneficial to society and that minimizes potential negative consequences. This can involve involving stakeholders from diverse backgrounds in the development and oversight of AI, as well as transparency and accountability in the use of AI.
Me: If AI systems become more intelligent than humans, how would it be possible for humans to govern and provide oversight for it?
ChatGPT: If AI systems were to become more intelligent than humans, it would present a number of challenges for governance and oversight. One potential solution would be to establish AI governance frameworks that are based on principles of transparency, accountability, and fairness. These frameworks could be designed to ensure that AI systems are transparent in their decision-making processes, and that they are accountable for their actions.
In addition, it may be necessary to develop new technologies and tools that can help humans to monitor and evaluate the behavior of AI systems. For example, researchers are currently working on developing explainable AI, which is a type of AI that is able to provide human-understandable explanations for its decisions and actions. This could help to provide a level of transparency and accountability for AI systems, even if they are more intelligent than humans.
Overall, it will be important for society to continue to adapt and evolve its approach to AI governance as the technology continues to advance. This will likely require a combination of new technologies, frameworks, and practices, as well as ongoing dialogue and collaboration among stakeholders.
Me: Which AI system is currently the most intelligent and why?
ChatGPT: It is difficult to determine which AI system is currently the "most intelligent," as intelligence is a complex and multi-dimensional concept. Additionally, the field of AI is constantly evolving, and new systems are being developed all the time that are capable of increasingly sophisticated tasks.
That being said, some of the most advanced AI systems currently in existence include:
Overall, these AI systems are considered to be among the most advanced in terms of their ability to perform complex tasks in their respective domains.
Me: Does an AI risk management framework exist?
ChatGPT: Yes, an AI risk management framework is a set of guidelines and practices that organizations can use to manage the risks associated with AI technology. This framework typically includes processes for identifying and assessing potential risks, implementing controls to mitigate those risks, and continuously monitoring and reviewing the effectiveness of those controls. The goal of an AI risk management framework is to help organizations safely and effectively use AI technology to achieve their goals.
Me: What is one example of an AI risk management framework in the banking and finance industry?
ChatGPT: One example of an AI risk management framework in the banking and finance industry is the "AI Governance Framework" developed by the Bank of England. This framework provides guidance to banks and other financial institutions on how to manage the risks associated with using AI technology in their operations. It includes principles for responsible AI, such as transparency, fairness, and accountability, as well as a set of governance and control measures that institutions can use to manage and mitigate the risks of using AI. These measures include conducting risk assessments, implementing controls to ensure the accuracy and reliability of AI systems, and establishing processes for monitoring and reviewing the use of AI.
Me: How does one go about performing a risk assessment of an artificial intelligence system?
ChatGPT: Performing a risk assessment of an artificial intelligence (AI) system typically involves identifying potential risks and evaluating their likelihood and impact. This can be a complex and multifaceted process that involves considering a wide range of factors, such as the potential consequences of the AI system failing or behaving in unexpected ways, the potential for the AI system to be used for malicious purposes, and the potential for the AI system to have negative impacts on individuals or society.
To perform a risk assessment, one would first need to identify the specific risks associated with the AI system in question. This could involve identifying potential failure modes or ways in which the AI system could behave in unexpected or undesirable ways. Once the risks have been identified, the next step would be to evaluate their likelihood and potential impact. This could involve using tools such as probability analysis or impact assessments to estimate the likelihood of each risk occurring, and to determine the potential consequences if it does.
The final step in performing a risk assessment would be to develop and implement strategies for mitigating or managing the identified risks. This could involve implementing safeguards or controls to prevent or mitigate the risks, or developing plans for how to respond if a risk does occur. Overall, the goal of a risk assessment is to identify potential risks associated with an AI system and to develop strategies for managing those risks in order to ensure that the AI system is used in a safe and responsible manner.
Me: How can I learn more about AI risk management?
ChatGPT: If you are interested in learning more about AI risk management, there are a number of resources that you can consult. Some suggestions include:
Overall, there are many different ways to learn more about AI risk management, and the best approach will depend on your specific interests and needs.
Me: As an IT risk manager, is my job in jeopardy because of AI?
ChatGPT: It is unlikely that AI will replace IT risk managers or other professionals who work in the field of risk management. While AI has the potential to automate certain tasks and make some processes more efficient, it is not likely to replace the need for human expertise and judgment in the field of risk management. In fact, the use of AI in risk management is likely to create new opportunities for IT risk managers and other professionals, as they will be needed to oversee the implementation and use of AI systems and to ensure that these systems are used effectively and safely. As such, the use of AI in risk management is not likely to jeopardize the job security of IT risk managers or other professionals in the field.
Me: Can AI and quantum computing interoperate?
ChatGPT: It is possible for AI and quantum computing to interoperate, but it is not clear exactly how they would work together. Quantum computing has the potential to greatly improve the performance of certain types of AI algorithms, such as those used for machine learning. However, the field of quantum computing is still in its infancy, and there is a lot of research that needs to be done before we will see any widespread use of quantum computing for AI.
Regarding the job displacement reply, "As AI technology becomes more advanced, it may displace human workers, leading to job losses and social upheaval": since this was an identified "risk", perhaps it should also be explicitly considered as part of an AI risk assessment. And regarding the unintended consequences statement, "such as the development of harmful AI or the disruption of entire industries": as with any technology, I believe there will always be "unknown unknowns", and few preventative controls can likely be built for them. In the realm of AI, however, as I've mentioned before, it will not only be developed for positive reasons; it will also be developed for negative purposes by less ethical people and institutions.
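To show how a risk like job displacement could be folded into such an assessment, here is a toy Python sketch of the likelihood-and-impact scoring I have in mind. The risk entries, the 1-5 scales, the scores themselves, and the escalation threshold are all my own illustrative assumptions, not taken from ChatGPT's replies or any particular framework.

```python
# Toy likelihood-and-impact scoring over a few identified AI risks, with the
# job displacement risk from the transcript included as one entry. The 1-5
# scales, the scores, and the escalation threshold are illustrative
# assumptions only, not part of any standard framework.

RISKS = [
    # (risk description, likelihood 1-5, impact 1-5)
    ("AI displaces human workers, causing job losses", 4, 4),
    ("AI system is developed or repurposed for malicious use", 3, 5),
    ("Unintended consequences from 'unknown unknowns'", 2, 5),
]

ESCALATION_THRESHOLD = 15  # arbitrary cutoff for management attention

for description, likelihood, impact in sorted(
    RISKS, key=lambda r: r[1] * r[2], reverse=True
):
    score = likelihood * impact
    flag = "ESCALATE" if score >= ESCALATION_THRESHOLD else "monitor "
    print(f"[{flag}] score={score:2}  {description}")
```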
Regarding the threat comment, "creating bots to spread misinformation", it seems as this is already happening (see https://www.justsecurity.org/82246/the-existential-threat-of-ai-enhanced-disinformation-operations/).
Regarding the threat comment, "social disruptions, such as the widening of economic inequality or the erosion of personal privacy", in my optimistic opinion as AI should be ethically developed, it should actually help with personal privacy. One unfortunate issue of personal privacy today is the lack of privacy necessary for defense and individual/societal protection (i.e., the government needs to know private information about individuals in order to best protect society, and therefore privacy is threatened). But, I would think that AI should help with this to a certain extent as long as the private information is not made available to humans who could actually exploit it.
I thought the answers to my question, "What happens if artificially intelligent systems become sentient", were very interesting and a bit unexpected, as the reply leaned towards science fiction. I would actually have expected a reply more in line with "that's pretty far-fetched and light years away, so you shouldn't be asking it". Yet, there you have it: psychological issues, emotional issues, loss of control, a new life form... Will the controls around these issues be AI therapists and psychologists? It seems to me that, as we humans develop these AI "children" (new life forms), we will be parents for as long as we are able and agile enough to handle them; until some of the AI "grow up" and become parents of their own.
I feel that governance and control over AI systems, as valuable as it will be in the beginning, will rather rapidly cease to prove fruitful, as advanced/general AI systems could very well eventually learn to sweep issues under the carpet while looking decent and producing positive, productive evidence. Humans manage non-AI IT risk by constantly and continuously monitoring the actions and results of operational activities, but I think it will take advanced AI auditing systems to monitor the results of AI systems, as their speed and complexity will go far beyond what humans could audit.
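To make that notion slightly more concrete, here is a toy sketch of one small piece of what such automated auditing might look like: a monitor that watches a model's output distribution and flags drift away from an established baseline. The statistics, window size, z-score threshold, and simulated data are all my own illustrative assumptions.

```python
# Toy monitor that watches a stream of model output scores and flags drift
# away from a baseline distribution. Window size, z-score threshold, and the
# simulated data are illustrative assumptions, not a production design.

from collections import deque
from statistics import mean, stdev

class OutputDriftMonitor:
    def __init__(self, baseline: list[float], window: int = 50, z_threshold: float = 3.0):
        self.base_mean = mean(baseline)
        self.base_std = stdev(baseline)
        self.recent = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record one model output; return True if the recent window has drifted."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet to judge
        z = abs(mean(self.recent) - self.base_mean) / max(self.base_std, 1e-9)
        return z > self.z_threshold

# Example: baseline scores near 0.5; later outputs slowly creep upward.
monitor = OutputDriftMonitor(baseline=[0.5, 0.48, 0.52, 0.49, 0.51, 0.47, 0.53])
for i in range(200):
    if monitor.observe(0.5 + i * 0.005):  # simulated slow drift
        print(f"Drift detected at observation {i}")
        break
```

A real AI auditing system would of course need to watch far more than one scalar, but the principle (machines continuously checking machines) is the same.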
The general responses to my requests for methods and processes to risk-assess AI systems should have been expected, as these are the general processes for most IT-based systems and software: perform a risk assessment; identify inherent risk, control effectiveness, residual risk, and gaps; resolve. But the specific consequences mentioned in one reply were helpful:
... the potential consequences of the AI system failing or behaving in unexpected ways, the potential for the AI system to be used for malicious purposes, and the potential for the AI system to have negative impacts on individuals or society.
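For completeness, here is a minimal sketch of the inherent risk / control effectiveness / residual risk arithmetic from that general process. The register entries, the scales, and the risk appetite threshold are again my own illustrative assumptions rather than part of any specific framework.

```python
# Minimal inherent-risk / control-effectiveness / residual-risk arithmetic:
# residual = inherent * (1 - control effectiveness). Register entries, scales,
# and the risk appetite threshold are illustrative assumptions only.

register = [
    # (risk, inherent score 1-25, control effectiveness 0.0-1.0)
    ("Biased or discriminatory model outputs", 20, 0.6),
    ("Model misused for disinformation", 15, 0.3),
    ("Silent degradation of model accuracy", 12, 0.5),
]

RISK_APPETITE = 8  # residual scores above this are treated as gaps to resolve

for risk, inherent, effectiveness in register:
    residual = inherent * (1 - effectiveness)
    status = "GAP - resolve" if residual > RISK_APPETITE else "within appetite"
    print(f"{risk:40} inherent={inherent:2}  residual={residual:4.1f}  {status}")
```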
After receiving the "AI Governance Framework" reply, I searched and found the Bank of England discussion paper "Artificial Intelligence and Machine Learning" (DP5/22), which I think is a must-read for anyone interested in this space: https://www.bankofengland.co.uk/prudential-regulation/publication/2022/october/artificial-intelligence
I'm pleased with the overall chat and responses, and glad that my job isn't in jeopardy; however, I don't think this will be the case forever. I do believe there are specific risk management methodologies that should be developed, and even more specific ones based on the type of AI solution being designed or implemented. I will most definitely be researching the subject further and delving in deeper, as I firmly believe that, for me at least, the future of IT and related risk management will be built on AI-based systems, and the two will be joined at the hip; it will not be possible to have one without the other. The sooner we prepare for this, the better off we all will be.