AI Is Changing Our Brains


Artificial intelligence is shaping not only how we think, but how we see ourselves. Will you one day be held accountable for something your digital self did without you knowing?

In 1976, philosopher Julian Jaynes advanced the provocative theory that our relatively recent ancestors lacked self-awareness. Instead, they mistook their inner voices for outside sources–the voice of God, say, or the ghosts of their ancestors. Jaynes called his theory “bicameralism” (Westworld fans will recall an episode from the last season called “The Bicameral Mind”), and, in his telling, this state persisted in early humans until about 3,000 years ago.

We are in a similar pre-conscious state now, but the voice we hear is not the other side of our brains. It’s our digital self–a version of us that is quickly becoming inseparable from our physical self. I call this commingled digital and analog self our “Meta Me.” The more the Meta Me uses digital tools, the more conscious it will become–a development that will have tremendous social, ethical, and legal implications. Some are already coming to light.

HOW YOUR META ME WORKS

We may think we’re in control, but we’re not. The notion that we have digital versions of ourselves has been around as long as we’ve had social media accounts, but until now we’ve been the ones typing the updates and clicking the like button. Digital systems are becoming smarter, and they’re taking decisions away from us. LinkedIn prompts us to “congratulate” contacts when they reach a milestone. Facebook does the same with birthdays. Our interaction has been reduced to a single click. It’s easy to see a time when the need for that last click will go away and our Meta Me will take over the duty of congratulating.

Most people aren’t aware of how much of their decision-making they have already relinquished to their Meta Me. Smart thermostats set the temperature in our homes. Media channels queue up what we should watch next. Our phones navigate for us. If you depend on Waze to get to a restaurant across town, there’s a good chance you don’t know where you are in the city, and you don’t need to know. At that point, you’re not the one making the decisions. The more we rely on computers, the more fully realized our Meta Me will become, and the more of our day-to-day decision-making we will cede to it. Eventually, we’ll do this without knowing, or even caring, until our Meta Me is representing us entirely and independently of our physical self.

WHY DOES IT MATTER?

Smart systems aren’t just personal assistants working on our behalf. Software is created by people, and behind it are businesses, political groups, and other potentially bad actors trying to influence our decisions.

Let’s say your Meta Me arranges for a self-driving car to pick you up from work and take you home. That car might be sponsored by a company that wants to drive you down a street lined with billboards and storefronts advertising its brands, rather than taking you home by the fastest, most efficient route.

The manipulative potential of AI became sharply clear after the 2016 U.S. election, when armies of bots were used to spread viral political ads. Computer algorithms analyzed social media behavior to develop predictions about people, then customized ads in real time based on the response. Samuel Woolley, an expert in computational propaganda at the Oxford Internet Institute, explained to Vox why AI-powered machines are such an effective tool for political communication: “One person controlling a thousand bot accounts is able to not just affect the people in their immediate circle but also potentially the algorithm of the site on which their [sic] operating.”

If your Meta Me is making decisions on your behalf—even recommending who and what to vote for—what data is that decision based on?

You might think you’re immune to the power of the Meta Me–that you would never grant a computer so much control over your life. But consider research underway in Arlington, Virginia, at the Defense Advanced Research Projects Agency (DARPA). There, scientists are exploring implanting computer chips in the human brain to read the wearer’s speech-related brain waves, allowing people to “speak” to others through a computer without actually saying a word. The project is called “Silent Talk,” and the goal is to create user-to-user communication through thought. Yes, such a technology still feels like science fiction, but it signifies the depth of our willingness to integrate with computing systems.

WILL YOU BE RESPONSIBLE FOR SOMETHING YOUR META ME DID WITHOUT YOU KNOWING?

If we choose to be dependent on machines and give more independence to our digital avatars, we also have to be aware of the larger implications of having a digital representative. This is not a far-off future. It may be early days, but we are already bumping into situations in which our digital selves are being treated as our real selves.

For example, this year the Department of Homeland Security confiscated an American couple’s mobile devices at the Canadian border and demanded their passwords, claiming it was the government’s right to search belongings if it had probable cause to do so. The ACLU sued on behalf of the couple, arguing that a phone is a far more personal item than a bag or suitcase, and that a person’s digital life (as represented on their phone) should be covered by the privacy protections of the Fourth Amendment.

Similarly, this fall the Supreme Court is hearing the appeal of a man who claims his privacy was violated after police used his cell phone’s location records to convict him of robbery. His argument is that he’s in jail because the prosecutor proved where his cell phone was, not where he was. But is there a difference? Will we one day be held accountable for something our Meta Me said or did without us knowing?

Clearly, the Meta Me is not just a passing curiosity. It’s the future of who we are. That is a troubling thought when you consider how poorly we regulate our existing digital behavior. To ensure the Meta Me lives harmoniously in the world, we must adopt new social constructs. Just as important, we must pursue for it the same hard-won legal protections that our corporeal selves enjoy.

This article first appeared on www.fastcodesign.com


About Author

Mark Rolston

Mark Rolston is cofounder and chief creative officer of argodesign. Previously, he led frog's global creative vision.
