For now it is not infallible, and this is no secret, if only because it is ChatGPT itself that points it out at the bottom of its login page (in a very small font, in the style of a banking or insurance contract clause): “ChatGPT can make mistakes. Consider verifying important information.” What counts as important information remains rather arcane: what would it be?

And that’s okay: it’s only a matter of time. The algorithm still needs to learn and to be properly calibrated, but in the end near-perfection will be achieved. Or will it?

A critical aspect, already highlighted in the literature and by users, concerns the biases that AI carries with it; after all, artificial intelligence is the child of the humans who generated it. And talis pater, talis filius.

In a recent paper, L. R. Jain and V. Menon of the University of Alabama list a number of biases connected to different applications of artificial intelligence. Let’s look at them.


Data or technical biases: These occur when the training data used to build the algorithm behind an AI application is itself biased, a situation colloquially captured by the phrase “bias in, bias out”. For example, if a facial-recognition algorithm is trained on a dataset made up predominantly of white faces, the AI/ML model may struggle to recognize the faces of individuals with darker skin tones.
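A toy sketch of “bias in, bias out”, with invented numbers (not real data): a lazy model that only ever learned the over-represented group can look 90% accurate overall while failing on every member of the minority group, which is exactly why aggregate accuracy hides this kind of bias.

```python
# Invented toy data: 90% of samples come from group A, 10% from group B.
true_groups = ["A"] * 90 + ["B"] * 10

# A degenerate "model" that only ever learned to output the majority group.
predictions = ["A"] * 100

# Overall accuracy looks reassuring in aggregate.
overall = sum(p == t for p, t in zip(predictions, true_groups)) / len(true_groups)

def group_accuracy(group):
    """Accuracy computed only on the samples belonging to one group."""
    pairs = [(p, t) for p, t in zip(predictions, true_groups) if t == group]
    return sum(p == t for p, t in pairs) / len(pairs)

print(overall)              # 0.9 -> looks fine in aggregate
print(group_accuracy("A"))  # 1.0
print(group_accuracy("B"))  # 0.0 -> the headline number hides total failure
```

The headline 90% figure is driven entirely by the majority group, which is why per-group evaluation (not just overall accuracy) is needed to surface data bias.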


Historical biases: Historical data is commonly used to train machine learning algorithms, but caution is required because of pre-existing biases. Amazon’s 2014 automated candidate-scoring system exemplifies the problem: trained on a decade of data, the algorithm favored male candidates because men were over-represented in technical positions at Amazon. As a result, non-male applicants were discriminated against, and the project was abandoned in 2015.


Social biases: These occur when the algorithm is programmed, intentionally or unintentionally, to discriminate against certain groups of people. According to psychologists there are some 180 different types of cognitive bias, some of which can find their way into design choices and influence the development of artificial intelligence systems. An example is a hiring algorithm programmed to favor candidates of a certain race or gender, or to screen out candidates with disabilities.

Automation biases: These occur when decision-makers blindly trust algorithmic results, even when the algorithm produces incorrect or biased output (this, I suspect, will become very widespread, especially in finance: “the artificial intelligence told me so, it can’t be wrong”). For example, a judge who relies solely on a recidivism-prediction algorithm for sentencing decisions may inadvertently perpetuate racial disparities in the criminal justice system.

Confirmation bias: Algorithms can perpetuate existing biases by selectively incorporating information that reinforces them. For example, language models have associated female names with words such as “parents” and “marriage,” while male names were linked to “professional” and “salary.” This suggests the models were trained on data reflecting these gender stereotypes, potentially contributing to gender wage disparities.

The authors conclude by noting that these bias generators have serious ramifications, especially in sectors such as education, advertising, design, and the arts, where erroneous results can reinforce negative stereotypes and contribute to unequal representation in visual media.

To find and correct biases, AI models must be carefully examined and scrutinized. Algorithmic bias can be reduced through measures such as fairness-aware fine-tuning techniques and re-evaluating how demographic groups are represented during transfer learning. Thoughtful design choices are essential to build unbiased, equitable AI models that embrace diversity, reduce bias, and promote fair outcomes across all demographic groups.
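One concrete way such scrutiny is done in practice is a fairness audit of the model’s decisions. A simple metric is the demographic parity difference: the gap in positive-decision rates between groups. The sketch below uses invented toy predictions for a hypothetical hiring model, just to show the calculation.

```python
# Invented toy data: model decisions (1 = hired) and the group of each applicant.
preds  = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(group):
    """Fraction of applicants in `group` that received a positive decision."""
    decisions = [p for p, g in zip(preds, groups) if g == group]
    return sum(decisions) / len(decisions)

# Demographic parity difference: 0 would mean equal treatment on this metric.
gap = positive_rate("A") - positive_rate("B")
print(positive_rate("A"))  # 0.8
print(positive_rate("B"))  # 0.2
print(round(gap, 2))       # 0.6 -> a large gap worth investigating
```

A large gap does not by itself prove discrimination (base rates may differ), but it flags where closer examination, and possibly fairness-aware re-training, is warranted.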

As if to say: I hope I’ll get away with it.



Obviously, this article was written and translated (in part) with the help of artificial intelligence. If there are mistakes, you know who to blame!