AI is considered the new superpower. According to one study, AI adoption in some capacity stands at 72% across industries worldwide, and it shows no signs of slowing down. Meanwhile, concerns about ethical issues surrounding AI are also high. According to a Pew Research Center report published in April 2025, more than 60% of the general public polled expressed concerns about misinformation, the security of their data, and bias or discrimination.
As database technologists and software developers, we play a crucial role in this evolution. A 2024 GitHub research survey indicated that more than 97% of respondents were already using AI for coding. Many of us may also be involved in developing AI-based software in various forms.
But how aware and conscious are we of ethical issues surrounding AI? Granted, our usage of AI may be driven by work-related reasons, but what about our own personal stances? Are we aware of ethical issues, and do these issues factor into our perception of AI in any way?
Studies reveal that developers exhibit only moderate familiarity with ethical frameworks such as fairness, transparency, and accountability. In a 2025 survey of 874 developers, 32.1% of participants reported having taken no action to address ethical challenges (Gao et al., 2025). Another study, from 2024, demonstrated the need for ‘comprehensive frameworks to fully operationalize Responsible AI principles in software engineering’ (Leça et al., 2024).
The purpose of this blog post is to examine ethical concerns related to AI as expressed by developers in the 2024 Stack Overflow Developer Survey.
The dataset comprises 41,999 observations (after removing respondents under 18 and those without a stated stance on AI) across developers in 181 countries. After the transformations, it appears as follows.
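As a rough sketch, the cleansing step might look like the following. The column names (`Age`, `AIStance`, `Country`) and values are placeholders, not the actual survey schema:

```python
import pandas as pd

# Toy stand-in for the raw survey export; real column names differ.
raw = pd.DataFrame({
    "Age": ["Under 18 years old", "25-34 years old", "35-44 years old", "25-34 years old"],
    "AIStance": [5, 4, None, 2],
    "Country": ["US", "US", "India", "Germany"],
})

# Drop respondents under 18 and those without a stated stance on AI.
clean = raw[(raw["Age"] != "Under 18 years old") & raw["AIStance"].notna()]
clean = clean.reset_index(drop=True)
print(clean)
```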

The questions I want to analyze, along with the relevant variables, are as follows.
1. How do ethical concerns correlate to how favorable or unfavorable the stance is?
The outcome (Stance on AI) as related to the potential predictors (the six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones).
2. How does productivity as a gain correlate to how favorable or unfavorable the stance is?
The outcome (Stance on AI) as related to the potential predictor (Productivity Gain).
3. How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is?
The outcome (Stance on AI) as related to the potential predictor (Productivity Gain), along with the six ethical concerns.
4. How does bias as an ethical issue and the age of the developer relate to the stance on AI?
The outcome (Stance on AI) as related to bias as an ethical issue, along with the respondent’s age bracket.
Methodology
The outcome analyzed for all four questions is the AI stance, a Likert scale with five values in increasing order of favorability. (This is a dummy variable derived from the text-based responses in the original survey.) The ‘predictor’ variables, the ones whose impact we are analyzing (ethical concerns and productivity gain), are binary. Age, the variable considered in the last question, is categorical, with age brackets.
I have used ‘odds ratios’ and ‘predicted probability’ to explain findings, as they are simple and easy to understand. An odds ratio compares the odds of a favorable AI stance (over a neutral or unfavorable one) between two groups, such as those with and without a given concern. Predicted probability is the chance of an event (in this case, a given stance on AI) out of all possibilities.
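To make the two measures concrete, here is a small worked example with made-up numbers (not survey results):

```python
# Of 200 respondents with an ethical concern, 120 hold a favorable stance;
# of 100 respondents without the concern, 90 hold a favorable stance.
fav_with, n_with = 120, 200
fav_without, n_without = 90, 100

odds_with = fav_with / (n_with - fav_with)              # 120/80 = 1.5
odds_without = fav_without / (n_without - fav_without)  # 90/10 = 9.0

# Odds ratio below 1: the concern is associated with lower odds of a favorable stance.
odds_ratio = odds_with / odds_without                   # 1.5/9 ≈ 0.17

# Predicted probability of a favorable stance, given the concern.
predicted_prob = fav_with / n_with                      # 120/200 = 0.6
```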
Descriptive Statistics of the dataset
The top ten countries by respondents are shown below, with the US contributing a disproportionately high number of people. This may be because the US has a significantly high number of developers in general. It also means overall views may be skewed toward US respondents.
For this analysis, I have not filtered the dataset by country, although this may be a worthwhile consideration for the next phase.

Stances on AI were overwhelmingly positive, with nearly half the respondents (48.2%) rating it as most favorable. Just 1.2% rated it as very unfavorable, with the rest falling between the two extremes.

65% of respondents reported productivity gains with AI.

There were a total of six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones. (A few additional responses were too specific to include in the analysis.) The majority of respondents had more than one ethical concern. Misinformation (25.8%) and lack of attribution (21.03%) ranked highest among the concerns. Very few respondents (2.11%) had no ethical concerns.
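Multi-select survey answers typically arrive as one delimited field per respondent, so tallying concerns might look like this sketch (the `;` delimiter and the example values are assumptions, not survey data):

```python
import pandas as pd

# Hypothetical multi-select responses; None marks a skipped question.
responses = pd.Series([
    "Misinformation;Lack of attribution",
    "Misinformation;Energy demand",
    "Biased results",
    None,
])

# Split each response into individual concerns and count mentions.
counts = responses.dropna().str.split(";").explode().value_counts()
shares = (counts / counts.sum() * 100).round(1)  # percentage of all mentions
print(shares)
```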

Question 1: How do ethical concerns correlate to how favorable or unfavorable the stance is?
I decided to group all the ethical concerns and weigh them against the stance. This is because most respondents have multiple ethical concerns.
I also verified whether the concerns overlap (i.e., whether the impact of one ethical concern is captured by another, which statistics calls ‘multicollinearity’). This was not the case, as the correlation matrix below demonstrates (the values are small compared to 1).
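A correlation check of this kind can be sketched as follows, using synthetic stand-ins for the six binary concern flags (random data, so low correlations by construction):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
concerns = ["biased_results", "misinformation", "lack_of_attribution",
            "energy_demand", "impersonation", "job_replacement"]

# Synthetic 0/1 flags standing in for the six concern indicators.
df = pd.DataFrame({c: rng.integers(0, 2, size=300) for c in concerns})

# Pairwise correlations; off-diagonal values near 0 suggest no multicollinearity.
corr = df.corr()
off_diag = corr.where(~np.eye(len(concerns), dtype=bool))
print(off_diag.abs().max().max())
```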

The results of the analysis were as follows.

Data Source: 2024 Stack Overflow Developer Survey
Odds are the ratio of the chance of an event happening to the chance of it not happening; here, the ‘event’ is holding a more favorable stance. Except for biased results, all ethical concerns have odds ratios of less than 1, indicating a less favorable stance. Even with biased results, the increase in stance is slight, and it may be related to other factors we have not considered. Energy demand shows the strongest association with lowered stances.
Question 2: How does productivity as a gain correlate to how favorable or unfavorable the stance is?
The graph shows that respondents reporting productivity gains have a high predicted probability of a favorable stance of 4 or 5 (the tall green bars). However, it also shows that these stances are taken by some respondents with no productivity gains (the red bar is also high for stance 4, although not for stance 5). Many people with no productivity gains exhibit a moderate stance (the tall red bar at 3).

Data Source: 2024 Stack Overflow Developer Survey
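Empirical probabilities of this kind can be computed with a normalized cross-tabulation of stance against the gain flag; here on toy data, not the survey responses:

```python
import pandas as pd

# Toy responses: stance (1-5) and whether the respondent reported productivity gains.
df = pd.DataFrame({
    "stance": [5, 4, 3, 4, 5, 3, 2, 5, 4, 3],
    "gain":   ["Yes", "Yes", "No", "No", "Yes", "No", "No", "Yes", "Yes", "Yes"],
})

# normalize="columns": each column sums to 1, giving the probability of each
# stance conditional on reporting gains / no gains.
probs = pd.crosstab(df["stance"], df["gain"], normalize="columns")
print(probs)
```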
Question 3: How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is?
This question examines how stances on AI change when considering both ethical issues and productivity factors. Out of the six ethical issues, I chose two: concerns around bias and misinformation. The charts, shown below, were mostly similar. The data falls into four buckets:
1. Those with gains and concerns (red)
2. Those with gains and no concerns (green)
3. Those with no gains and concerns (blue)
4. Those with no gains and no concerns (purple)
Data Source: 2024 Stack Overflow Developer Survey
All things being equal, those with gains and concerns (red bar) show highly favorable stances (4 or 5).
All things being equal, those with gains and no concerns (green bar) also show neutral to favorable, but not highly favorable – perhaps other factors related to usage may be at play here.
All things being equal, those with no gains and concerns tend to be moderate to favorable, with some also being less favorable (blue bar).
All things being equal, those with no gains and no concerns seem to lean towards neutral to favorable (purple bar).
It may seem odd that those with no gains and no concerns seem to have favorable stances. There may be other variables at play here that we have not considered, such as gains other than productivity, for example. This again is something to examine during the next phase of analysis.
Overall, productivity gains appear to show more favorable stances (green and red bars).
Question 4: How does bias as an ethical issue and the age of the developer relate to the stance on AI?
Adding the respondent’s age bracket to bias as an ethical concern, and analyzing both against stances on AI, yields the results below.

Data Source: 2024 Stack Overflow Developer Survey
All things being equal, the odds of people in the oldest age bracket (over 65 years old) taking a less favorable stance seem significantly higher compared to those in the youngest age bracket (25-34 years old).
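One way to set up such a comparison is to dummy-code the age brackets so each bracket’s odds can be measured against a reference bracket; a sketch with hypothetical labels:

```python
import pandas as pd

# Hypothetical age-bracket labels; the survey's exact wording may differ.
df = pd.DataFrame({
    "age": ["25-34 years old", "65 years or older", "35-44 years old", "25-34 years old"],
    "bias_concern": [1, 0, 1, 0],
})

# drop_first=True drops the alphabetically first bracket (25-34 here),
# making it the reference category the other brackets are compared to.
X = pd.get_dummies(df["age"], prefix="age", drop_first=True)
X["bias_concern"] = df["bias_concern"]
print(X.columns.tolist())
```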
Results
1. How do ethical concerns correlate to how favorable or unfavorable the stance is?
This analysis focused on the outcome (Stance on AI) as correlated to the potential predictor (the six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones). Energy Demand as a concern appeared to have the highest correlation to less favorable stances.
All other ethical issues exhibited a correlation with less favorable stances, except for bias, which showed a slightly positive correlation.
2. How does productivity as a gain correlate to how favorable or unfavorable the stance is?
This analysis focused on the outcome (Stance on AI) as related to the potential predictor (Productivity Gain). Productivity gains are significantly associated with higher stances, although the absence of gains doesn’t necessarily mean lower stances.
3. How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is?
This question led to analyzing the outcome (Stance on AI) in relation to the potential predictor (Productivity Gain), along with the six ethical concerns.
Those with gains and concerns show highly favorable stances.
Those with gains and no concerns exhibit neutral to favorable attitudes, but not highly favorable ones.
Those with no gains and concerns seem moderate to favorable, with some also expressing less favorable views.
Those with no gains and no concerns seem to lean neutral to favorable.
4. How does bias as an ethical issue and the age of the developer relate to the stance on AI?
The last analysis examined the outcome (Stance on AI) as related to bias as an ethical issue, considering the respondent’s age bracket. The odds of people in the oldest age bracket (over 65 years old) taking a less favorable stance appear significantly higher than for those in the youngest bracket (25-34 years old).
Key findings summarized
- The majority of respondents expressed ethical concerns.
- Energy Demand as a concern appeared to have the highest correlation to less favorable stances.
- All other ethical issues had a correlation with less favorable stances, except bias.
- Productivity gains seemed associated with higher stances despite ethical concerns.
- Bias and misinformation as concerns do not appear to significantly impact higher stances.
- Favorable stances appear to be high overall, regardless of productivity or ethical issues.
Further work
- Examine the impact of other gains besides productivity.
- Filter the dataset by specific countries for more insight into country-specific data.
Limitations
It is critical to bear in mind that correlation is not causation, and that favorable or less favorable stances do not necessarily reflect the presence or absence of ethical concern. However, given the patterns found, it is worth researching further to explore deeper relationships with demographics (country, age), and to filter the dataset by specific countries for more insight.
The dataset is also limited to developers, not specifically those working on AI, although some of them may be. Perspectives and findings may vary with a dataset of AI Developers.
The dataset is also heavily skewed in terms of respondents from the USA compared to those from other countries.
Resources on Responsible AI
Online Courses & Certifications
AI Ethics by the Linux Foundation (LFD117x)
Responsible AI by Microsoft
Books
‘Artificial Unintelligence’ by Meredith Broussard
‘Ethics of Artificial Intelligence’ by S Matthew Liao
‘Atlas of AI’ by Kate Crawford
Academic & Organizational Frameworks
https://datamodel.com/responsible-and-ethical-ai-frameworks/ (blog post by Karen Lopez, InfoAdvisors, with a link to several resources)
References
- S. T. (2025, May 6). AI insights: 20 statistics transforming business in 2025. Superhuman Blog. Retrieved July 1, 2025, from https://blog.superhuman.com/ai-insights/
- McClain, C., Kennedy, B., & Gottfried, J. (n.d.). Views of risks, opportunities and regulation of AI. Pew Research Center. https://www.pewresearch.org/internet/2025/04/03/views-of-risks-opportunities-and-regulation-of-ai/
- Daigle, K., & G. S. (n.d.). Survey: The AI wave continues to grow on software development teams. GitHub Blog. https://github.blog/news-insights/research/survey-ai-wave-grows/
- Värttö, A. (2025). Awareness of AI ethics in professional software engineering: A survey [Master’s thesis, University of Oulu].
- Gao, H., Zahedi, M., Jiang, W., Lin, H. Y., Davis, J., & Treude, C. (2025). AI safety in the eyes of the downstream developer: A first look at concerns, practices, and challenges. arXiv. https://arxiv.org/abs/2503.19444
- Leça, M. D., Bento, M., & Santos, R. D. (2024). Responsible AI in the software industry: A practitioner-centered perspective. arXiv. https://arxiv.org/abs/2412.07620

