An ethics and compliance professional asks ChatGPT about ethical AI

It is hard to find a business publication or news outlet that’s not talking about generative AI, especially artificial intelligence chatbots like ChatGPT. This technology holds enormous potential for organizations across industries and regions. But as ethics and compliance professionals know, new opportunities can also mean new risks that affect not only business performance but broader ethical culture. Companies are already drafting policies around the use of AI tools. Technology reporters are testing the limits of AI’s capacity for human connection. Experts are exploring the ethical implications of AI. Even Sam Altman, the CEO of OpenAI (which created ChatGPT and the technology behind Microsoft’s Bing AI), has openly stressed that regulators and society need to be involved with the technology to guard against potentially negative consequences for humanity.

Recently, I decided to take Sam up on his request. I had a conversation with ChatGPT about the possible consequences of AI technology. I asked how we—E&C professionals, business leaders, and corporate citizens alike—can ensure that the technology is developed and deployed in a way that benefits everyone, not just the privileged few. ChatGPT started out with very optimistic predictions of how it would benefit everyone, so I began to challenge it with provocative questions about what could happen if society did not put controls in place. The list of negative consequences was alarming.

Here is the entire transcript of my conversation with ChatGPT. No information has been deleted or edited, including spelling or grammatical errors (which were mine).

what is your name? 

My name is ChatGPT.

are you real? 

I am an artificial intelligence language model created by OpenAI, so in that sense, I am not a living being like a human. However, I am a real program running on servers, and I can process and generate text based on the input I receive from users. 

why? 

I was created to assist and interact with users in natural language, in order to provide information, answer questions, or perform certain tasks based on user requests. My purpose is to make it easier for people to access information, get help, or simply have a conversation. As an AI language model, I can also learn from the interactions I have with users and improve my performance over time.

why? 

There are many reasons why people might create an AI language model like me. One of the main goals is to develop technology that can make communication between humans and computers more natural and seamless. By using natural language processing techniques and machine learning algorithms, AI language models like me can understand and respond to user requests in a way that feels more like a conversation, rather than a series of commands or queries. This can make it easier for people to get the information or help they need, and can also help improve the accessibility and usability of technology more broadly. 

will you put people out of work? 

As an AI language model, I do not have the ability to put people out of work on my own. However, it is true that advances in artificial intelligence and automation have the potential to impact the job market in various ways. While AI technologies can automate certain tasks and processes, they can also create new opportunities and lead to the development of new jobs that require different skills.

It is important for society to consider the potential impacts of AI on the job market and take steps to ensure that workers are equipped with the skills and knowledge they need to adapt to a changing economy. This may include investing in education and training programs, promoting the development of new industries and job opportunities, and implementing policies that support workers who may be affected by technological change. Ultimately, the goal should be to create an economy that benefits everyone, both in terms of productivity and employment.

but capitalism doesn't work that way

Capitalism is an economic system in which private individuals and businesses own and control the means of production and distribution of goods and services. In a capitalist system, market forces play a significant role in determining which products are produced, how they are produced, and at what price they are sold. While capitalism can generate economic growth and innovation, it can also lead to income inequality and other social and environmental challenges.

The impact of AI on the job market is a complex issue, and it is true that market forces alone may not be sufficient to address the potential challenges. However, governments and other stakeholders can take steps to shape the development and deployment of AI technologies in ways that promote the well-being of workers and society as a whole.

This may include policies such as investing in education and training, implementing labor protections and regulations, and supporting the development of new industries and job opportunities. By taking a comprehensive and proactive approach, it is possible to ensure that the benefits of AI are shared broadly and that the negative impacts are minimized. 

what if governments don't care about people and are only concerned with maximizing the profits of the rich and powerful who fund their campaigns 

It is true that in some cases, governments may prioritize the interests of the wealthy and powerful over the needs and well-being of the general population. However, it is also important to recognize that governments are accountable to their citizens, and public opinion and pressure can have a significant impact on policy decisions.

In democratic societies, citizens have the ability to participate in the political process through voting, advocacy, and other forms of activism. By raising awareness of issues and advocating for policies that support the well-being of workers and society as a whole, it is possible to influence the decisions of policymakers and ensure that the development and deployment of AI technologies are guided by the principles of social responsibility and ethics. 

Moreover, civil society groups, think tanks, and academic researchers can play an important role in shaping public discourse and policy decisions related to AI and its impacts on the job market. By conducting research, raising awareness of issues, and advocating for policies that promote the well-being of workers, these groups can help ensure that the benefits of AI are shared broadly and that the negative impacts are minimized. 

Overall, it is important to engage in informed and active citizenship, to ensure that the development and deployment of AI technologies are guided by principles of social responsibility and ethics, and that they benefit everyone in society, not just a select few. 

do you see signs of this happening in the United States? 

There are certainly ongoing discussions and debates in the United States and other countries around the world about the impact of AI on the job market and society as a whole. There are also many organizations and groups working to shape policy decisions related to AI and advocate for principles of social responsibility and ethics. 

At the same time, it is also true that there are significant economic and political interests at play, and the development and deployment of AI technologies can be influenced by a range of factors, including industry lobbying and political contributions. 

Overall, it is difficult to predict the future trajectory of AI development and its impact on the job market and society as a whole. However, by engaging in informed and active citizenship, advocating for policies that promote social responsibility and ethics, and staying informed about the latest developments and trends, it is possible to help ensure that the benefits of AI are shared broadly and that its potential negative impacts are minimized.  

What are some of the possible implications if AI is allowed to develop and proliferate solely for the benefit of billionaires with no effective controls or regulations in place to look out for the interests of average citizens? 

If AI is developed and deployed solely for the benefit of billionaires, without effective controls or regulations in place to look out for the interests of average citizens, it could lead to a range of negative impacts. Some of the possible implications include: 

  • Increased income inequality: If AI technologies are deployed in ways that primarily benefit the wealthy, it could exacerbate existing income and wealth inequalities. This could result in a small number of people becoming even more wealthy and powerful, while the vast majority of people see little or no benefit from the technology. 
  • Job displacement and unemployment: AI technologies have the potential to automate many tasks and processes, which could lead to job displacement and unemployment for a large number of workers. If there are no policies in place to support workers who are displaced by AI, this could lead to economic hardship and social unrest. 
  • Social fragmentation: If AI is developed and deployed without effective controls or regulations in place, it could exacerbate social fragmentation by creating new forms of inequality and division. This could lead to a breakdown of social cohesion and the emergence of new forms of social conflict. 
  • Bias and discrimination: If AI is developed and deployed without effective controls or regulations in place, it could reinforce existing biases and discrimination in society. This could lead to unfair and discriminatory outcomes for certain groups of people, such as women, people of color, and marginalized communities. 
  • Privacy and security risks: AI technologies can also pose significant privacy and security risks, particularly if they are deployed in ways that are not transparent or accountable. This could lead to the loss of sensitive personal information, cyber attacks, and other security breaches that could have significant impacts on individuals and society as a whole. 

Overall, it is important for society to consider the potential risks and implications of AI development and deployment, and to take steps to ensure that the technology is developed and used in ways that promote social responsibility, ethics, and the well-being of all people. This may include policies such as regulatory frameworks, public oversight, and ethical guidelines for AI developers and users. 

The key takeaway 

While I was unsettled in certain respects by my conversation with ChatGPT, I agree with its conclusion about what we need to do to make sure that AI technology is developed ethically and with everyone’s benefit in mind. However, it is a tall order, and I am not yet convinced that societies and governments are up to the challenge of ensuring that happens. As ethics and compliance professionals, we need to get involved in pushing for the safeguards that ChatGPT itself is advocating.

LRN research shows that companies with the strongest ethical cultures outperform—by nearly 40%—across all measures of business performance, providing a strong case for business to invest in building more rigor into the management of ethical culture. I am hopeful that this trend, which is already influencing the tech sector, will become more deeply integrated into the development and refinement of AI tools as we continue to evaluate their potential.