AI 'can stunt the skills necessary for independent self-creation': Relying on algorithms could reshape your entire identity without you realizing

Can we trust algorithms to make the best decisions for us, and what does that mean for our agency? (Image credit: Getty Images/gremlin)

The rise of artificial intelligence (AI) poses questions not just for technology and the expanding range of possibilities it brings, but for morality, ethics and philosophy too. Ushering in this new technology carries implications for health, law, the military, the nature of work, politics and even our own identities — what makes us human and how we achieve our sense of self.

"AI Morality" (Oxford University Press, 2024), edited by British philosopher David Edmonds, is a collection of essays from a "philosophical task force" exploring how AI will revolutionize our lives and the moral dilemmas it will trigger, painting an immersive picture of the reasons to be cheerful and the reasons to worry. In this excerpt, Muriel Leuenberger, a postdoctoral researcher in the ethics of technology and AI at the University of Zurich, focuses on how AI is already shaping our identities.

Her essay, entitled "Should You Let AI Tell You Who You Are and What You Should Do?", explains how the machine learning algorithms that dominate today's digital platforms — from social media to dating apps — may know more about us than we know ourselves. But, she asks, can we trust them to make the best decisions for us, and what does that mean for our agency?


Your phone and its apps know a lot about you. Who you are talking to and spending time with, where you go, what music, games, and movies you like, how you look, which news articles you read, who you find attractive, what you buy with your credit card, and how many steps you take. This information is already being exploited to sell us products, services, or politicians. Online traces allow companies like Google or Facebook to infer your political opinions, consumer preferences, whether you are a thrill-seeker, a pet lover, or a small employer, how probable it is that you will soon become a parent, or even whether you are likely to suffer from depression or insomnia.

With the use of artificial intelligence and the further digitalization of human lives, it is no longer unthinkable that AI might come to know you better than you know yourself. The personal user profiles AI systems generate could become more accurate in describing a user's values, interests, character traits, biases, or mental disorders than the user themselves. Already, technology can reveal personal information that individuals did not know about themselves. Yuval Harari exaggerates but makes a similar point when he claims that it will become rational and natural to pick the partners, friends, jobs, parties, and homes suggested by AI. AI will be able to combine the vast personal information about you with general information about psychology, relationships, employment, politics, and geography, and it will be better at simulating possible scenarios regarding those choices.

So it might seem that an AI that lets you know who you are and what you should do would be great, not just in extreme cases, à la Harari, but more prosaically for common recommendation systems and digital profiling. I want to raise two reasons why it is not.

Trust

How do you know whether you can trust an AI system? How can you be sure whether it really knows you and makes good recommendations for you? Imagine a friend telling you that you should go on a date with his cousin Alex because the two of you would be a perfect match. When deciding whether to meet Alex, you reflect on how trustworthy your friend is. You may consider your friend's reliability (is he currently drunk and not thinking clearly?), competence (how well does he know you and Alex, and how good is he at making judgements about romantic compatibility?), and intentions (does he want you to be happy, trick you, or ditch his boring cousin for an evening?). To see whether you should follow your friend's advice, you might gently interrogate him: Why does he think you would like Alex? What does he think you two have in common?

This is complicated enough. But judgements of trust in AI are more complicated still. It is hard to understand what an AI really knows about you and how trustworthy its information is. Many AI systems have turned out to be biased — they have, for instance, reproduced racial and sexist biases from their training data — so we would do well not to trust them blindly. Typically, we can't ask an AI to explain its recommendations, and it is hard to assess its reliability, its competence, and the developer's intentions. The algorithms behind the predictions, characterizations, and decisions of AI are usually company property and not accessible to the user. And even if this information were available, it would require a high degree of expertise to comprehend it. How do those purchase records and social media posts translate into character traits and political preferences? Because of the much-discussed opacity, or "black box" nature, of some AI systems, even those proficient in computer science may not be able to understand an AI system fully. The process by which AI generates an output is largely self-directed (meaning it generates its own strategies without following strict rules designed by the developers) and difficult or nearly impossible to interpret.

Create Yourself!

Even if we had a reasonably trustworthy AI, a second ethical concern would remain. An AI that tells you who you are and what you should do is based on the idea that your identity is something you can discover — information you or an AI may access. Who you really are and what you should do with your life is accessible through statistical analysis, some personal data, and facts about psychology, social institutions, relationships, biology, and economics. But this view misses an important point: We also choose who we are. You are not a passive subject to your identity — it is something you actively and dynamically create. You develop, nurture, and shape your identity. This self-creationist facet of identity has been front and centre in existentialist philosophy, as exemplified by Jean-Paul Sartre. Existentialists deny that humans are defined by any predetermined nature or "essence." To exist without essence is always to become other than who you are today. We are continually creating ourselves and should do so freely and independently. Within the bounds of certain facts — where you were born, how tall you are, what you said to your friend yesterday — you are radically free and morally required to construct your own identity and define what is meaningful to you. Crucially, the goal is not to unearth the one and only right way to be but to choose your own, individual identity and take responsibility for it.

AI can give you an external, quantified perspective which can act as a mirror and suggest courses of action. But you should stay in charge and make sure that you take responsibility for who you are and how you live your life. An AI might state a lot of facts about you, but it is your job to find out what they mean to you and how you let them define you. The same holds for actions. Your actions are not just a way of seeking well-being. Through your actions, you choose what kind of person you are. Blindly following AI entails giving up the freedom to create yourself and renouncing your responsibility for who you are. This would amount to a moral failure.

Ultimately, relying on AI to tell you who you are and what you should do can stunt the skills necessary for independent self-creation. If you constantly use an AI to find the music, career, or political candidate you like, you might eventually forget how to do this yourself. AI may deskill you not just on the professional level but also in the intimately personal pursuit of self-creation. Choosing well in life and constructing an identity that is meaningful and makes you happy is an achievement. By subcontracting this power to an AI, you gradually lose responsibility for your life and ultimately for who you are.

A very modern identity crisis

You may sometimes wish for someone to tell you what to do or who you are. But, as we have seen, this comes at a cost. It is hard to know whether or when to trust AI profiling and recommendation systems. More importantly, by subcontracting decisions to AI, you may fail to meet the moral demand to create yourself and take responsibility for who you are. In the process, you may lose skills for self-creation, calcify your identity, and cede power over your identity to companies and governments. Those concerns weigh particularly heavily in cases involving the most substantial decisions and features of your identity. But even in more mundane cases, it would be good to put recommendation systems aside from time to time and to be more active and creative in selecting movies, music, books, or news. This, in turn, calls for research, risk, and self-reflection.

Of course, we often make bad choices. But this has an upside. By exposing yourself to influences and environments that are not in perfect alignment with who you currently are, you develop. Moving to a city that makes you unhappy could disrupt your usual life rhythms and nudge you, say, into seeking a new hobby. Constantly relying on AI recommendation systems might calcify your identity. This is, however, not a necessary feature of recommendation systems. In theory, they could be designed to broaden the user's horizon instead of maximizing engagement by showing customers what they already like. In practice, that's not how they function.
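To make that design alternative concrete, here is a minimal sketch in Python of what a horizon-broadening recommender could look like. It illustrates the idea rather than anything from the essay: the function recommend, its (item, score) input format, and the explore_frac parameter are all assumptions, and a real system would model novelty and diversity far more carefully than sampling at random from the long tail.

```python
import random

def recommend(scored_items, k=10, explore_frac=0.3, seed=None):
    """Pick k items, reserving a share of slots for horizon-broadening choices.

    scored_items: list of (item, score) pairs, where the score estimates how
    well an item matches the user's existing profile (higher = more familiar).
    explore_frac: fraction of slots deliberately given to items outside the
    user's usual taste, rather than to pure engagement-maximizing top scores.
    """
    rng = random.Random(seed)
    ranked = sorted(scored_items, key=lambda pair: pair[1], reverse=True)

    n_explore = int(k * explore_frac)
    n_exploit = k - n_explore

    # Familiar picks: what a purely engagement-maximizing system would show.
    familiar = [item for item, _ in ranked[:n_exploit]]

    # Broadening picks: sampled from the long tail the profile would filter out.
    tail = [item for item, _ in ranked[n_exploit:]]
    broadening = rng.sample(tail, min(n_explore, len(tail)))

    return familiar + broadening
```

With explore_frac set to 0.3, three of every ten recommendations would come from outside the user's established taste: a small, built-in version of the disruptive move to an unfamiliar city.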

This calcifying effect is reinforced when AI profiling becomes a self-fulfilling prophecy. It can slowly turn you into what the AI predicted you to be and perpetuate whatever characteristics the AI picked up. Through the products it recommends and the ads, news, and other content it shows, you become more likely to consume, think, and act in the way the AI system initially considered suitable for you. The technology can gradually influence you such that you evolve into who it originally took you to be.
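The feedback loop described above is easy to see in a toy simulation. The sketch below is my own illustration, not from the essay, and the numbers are arbitrary: a profile starts with a slight preference for one category, the system always recommends the current favourite, and each act of consumption nudges the profile further in that direction.

```python
def simulate(preferences, rounds=20, nudge=0.1):
    """Simulate a profile-reinforcing recommender.

    preferences: dict mapping content category -> interest weight (summing to 1).
    Each round, the top category is recommended; consuming it strengthens it.
    """
    for _ in range(rounds):
        top = max(preferences, key=preferences.get)  # engagement-maximizing pick
        preferences[top] += nudge                    # consumption reinforces it
        total = sum(preferences.values())
        preferences = {c: w / total for c, w in preferences.items()}  # renormalize
    return preferences

print(simulate({"news": 0.40, "music": 0.35, "sport": 0.25}))
# The initially slight lead of "news" snowballs until it dominates the profile.
```

After twenty rounds the profile has calcified around its starting bias, which is the self-fulfilling prophecy in miniature.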

Disclaimer

This excerpt, written by Muriel Leuenberger, has been edited for style and length. Reprinted with permission from "AI Morality," edited by David Edmonds, published by Oxford University Press. © 2024. All rights reserved.


AI Morality

There is no more important issue at present than AI. It has begun to penetrate almost every sphere of human activity. It will disrupt our lives entirely. David Edmonds brings together a team of leading philosophers to explore some of the urgent moral concerns we should have about this revolution. The chapters discuss self and identity, health and insurance, politics and manipulation, the environment, work, law, policing, and defense. Each of them explains the issue in a lively and illuminating way, and takes a view about how we should think and act in response. Anyone who is wondering what ethical challenges the future holds for us can start here.

Muriel Leuenberger

Leuenberger is a postdoctoral researcher at the Digital Society Initiative and the Department of Philosophy at the University of Zurich. Her research interests include the ethics of technology and AI, medical ethics (neuroethics in particular), philosophy of mind, meaning in life, philosophy of identity, authenticity, and genealogy.