Agency is defined as the capacity, condition or state of acting or of executing power. Given that definition the following is what I derived from the excerpts in this segment. See if you agree.
My Takeaways: The primary battle for agency, now under way and accelerating, is between machine learning and individual human beings; by extension, machine learning includes Big Tech. Big Tech knows the battle is in progress and is fighting it skillfully. Most people are ignorant of what is happening, and even when told, they usually dismiss it as a conspiracy theory. This is my interpretation, drawn from the excerpts, of how the battle is being fought.
Big Tech’s first move is to instill and increase its users’ reliance on thinking machines, slowly and methodically. As this transpires, users’ agency diminishes as they cede their individual power to the machines. In time the individual cedes free will and most decision making to the machines. The more this occurs, the more the machines control us.
But self-esteem and reputation remain critical to human existence even as free will and decision making disappear. To fill the vacuum, people turn more and more to social media, counting likes and followers to meet the psychological need. Popularity then becomes the measure of worth rather than merit, and popularity-reinforcing propaganda becomes more important than truth. This pushes people into like-minded groups or tribes to garner affirming feedback, undermining the goal of unity and denigrating individuality. Debate is squashed.
This ceding of power bit by bit, coupled with the growing importance of social media popularity, results in a continuous transfer of power to the small group of people controlling the machines. These people become the ruling elite, whether they are Big Tech, the Chinese Communist Party, or some other small, powerful entity like a single political party. Is this what the excerpts are trying to convey to us? Is this what is happening? Is this a consequence of AI, a consequence of the manipulation of AI, or not a consequence at all?
Next: There are only two notable nations that are pursuing AI with vigor: the United States and China. But the two are taking very different paths in their approach to the technology. Segment 11 lays out the differences.
Happy Learning, Harley
BIG TECH & AI – SEGMENT 10 THE BATTLE FOR AGENCY – EXCERPTS
INTRODUCTION: Deep learning in machines is resulting in shallow knowledge in humans – an irony indeed. Cognitive skills like memory and attention span are atrophying, even as knowledge, authority and agency are transferred from humans to machines. In effect, AI has managed to hack human psychology. The asymmetric relationship between gigantic digital platform businesses and their users is of paramount importance. These companies deliver the most popular and widely used services in the world today, designed specifically to meet the demands of a public that is hungry for social media. Beneath the surface, however, suppliers and consumers have opposing interests – in privacy, data rights, agency, intellectual property rights and free speech. This battle is distinct in one important respect: one player is largely unaware that a battle is under way at all. The suppliers of digital services understand the game and play it skillfully, while most consumers of digital media do not even know they are playing. In fact, when people are informed that they are voluntarily surrendering psychological control of their lives, they usually dismiss it as a conspiracy theory.
The overarching power of the digital network derives from its use of data to exert psychological influence and its ability to hijack emotions. The more successful these new technologies are in providing what may be called artificial pleasures and gratifications, the more humans are subject to manipulation. The surrender of personal agency amounts to a loss of selfhood.
ARTIFICIAL EMOTIONAL INTELLIGENCE: Contrary to popular belief that computers are inherently incapable of acquiring emotional and social skills, considerable progress is being made to develop machines’ ability to sense and reason, and to mimic human emotional states and social situations. These capabilities are already being used for human-machine interactions, including situations that involve emotional caring, negotiation and persuasion. The term artificial emotional intelligence refers to the following kinds of abilities:
Predicting individual behavior by modeling emotional patterns. AI can develop emotional profiles of individuals that enable a machine to evaluate someone’s psychological state.
Substituting for human contact by providing emotional interaction. AI is becoming adept at reading and responding to emotions like a human.
Influencing moods and shifting people’s choices toward a product or idea with emotional value. With the ability to masquerade as human, AI can make people feel good about themselves, boost their self-esteem, and reinforce specific ideas. It can make them feel happy or sad or convince them to choose a certain movie, buy a specific product, fall in love with someone, start hating someone or something, and so forth.
The machine emulates human empathy by giving appropriate emotional responses, thereby enriching human-machine interactions. Affective computing systems can look at a face and determine the person’s mood – happy, sad, delighted, angry, etc. They can listen to a conversation between two people and successfully guess whether the people are related or not. Affective computing capabilities are being added to AI-generated movies with multiple plots and endings that dynamically change depending on the measured emotional state of the viewer.
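As a minimal, illustrative sketch of the affective-computing idea above — estimating a person’s mood from a signal — here is a toy mood classifier over text. Real systems use trained models over faces, voice, and language; the tiny valence lexicon, word values, and thresholds here are all hypothetical, invented for illustration.

```python
# Hypothetical word-valence lexicon (values in [-1, 1]); real affective
# systems learn these associations from large labeled datasets.
VALENCE = {
    "love": 0.9, "great": 0.7, "happy": 0.8, "delighted": 0.9,
    "sad": -0.8, "angry": -0.9, "terrible": -0.7, "hate": -0.9,
}

def mood(text: str) -> str:
    """Classify text as 'happy', 'sad', or 'neutral' by average valence."""
    words = text.lower().split()
    scores = [VALENCE[w] for w in words if w in VALENCE]
    if not scores:
        return "neutral"
    avg = sum(scores) / len(scores)
    if avg > 0.2:
        return "happy"
    if avg < -0.2:
        return "sad"
    return "neutral"
```

For example, mood("i love this great day") comes back "happy", while a sentence with no lexicon words falls through to "neutral" — the same coarse mood buckets (happy, sad, angry, etc.) the excerpt describes, just driven by a trivial signal.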
An interesting example of an emotional machine is a Japanese robot available on the market to provide companionship to lonely men. The female-styled robot goes to work with them, listening to and learning the clients’ behavioral patterns, and offering them customized advice and comfort. Continuing feedback from the clients’ responses improves the robot’s effectiveness. The goal of these robotic companions is to serve as therapist, counselor, or friend and act as the first line of defense and protection in the event clients consider actions that might be a danger to themselves or others.
The broad goal of all these fields and subfields is to understand human cognition, replace or augment humans with machines, and influence people’s choices. These functions are already being widely used for clinical medicine, political analysis, customer service, market research and business strategy. Considerable research, however, is still needed before models can understand and replicate human common sense, which is implicit knowledge and often unconsciously ingrained in human interactions.
EMOTIONAL HIJACKING: Dumbing Down the Masses: People’s memories are atrophying because they constantly depend on online searches and intelligent devices for information. As memory atrophies, attention span shortens, leading to a decline in study habits. At the same time, digital users artificially inflate their egos through social media platforms like Facebook with instant popularity measured by the number of likes or followers, sometimes running into the millions. While these activities enhance social status, they also contribute to a greater dependency on, and addiction to, social media. Ultimately, such users become dependent on social media for their self-esteem and psychological well-being. This cognitive reengineering is not a passing fad but the likely future being driven by the latest AI technology. I use the term “moronization” to refer to this dumbing down of large portions of society.
A machine’s emotional engagement with people advances through a few definable stages:
Learning about users’ emotions to build a psychological model or map of likely responses.
Establishing an emotional relationship that users learn to trust.
Offering personal, intimate advice, starting with gentle, harmless suggestions.
Substituting a mechanized form of companionship that seems human.
Manipulating human psychology by influencing users to behave according to mandates determined by the machine’s developers.
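The five stages above can be sketched as a simple state machine that advances as interaction accumulates. The stage names are paraphrases of the list, and the advancement rule (one stage per fixed number of interactions) is a hypothetical simplification; a real system would use behavioral signals, not a counter.

```python
# Stages of machine emotional engagement, paraphrasing the list above.
STAGES = [
    "learning",       # build a psychological model of the user
    "trust",          # establish an emotional relationship
    "advice",         # offer gentle, harmless personal suggestions
    "companionship",  # substitute mechanized companionship
    "manipulation",   # steer behavior per the developers' mandates
]

class EngagementModel:
    """Toy model: the relationship deepens one stage per N interactions."""

    def __init__(self, interactions_per_stage: int = 100):
        self.per_stage = interactions_per_stage
        self.interactions = 0

    def record_interaction(self) -> None:
        self.interactions += 1

    @property
    def stage(self) -> str:
        idx = min(self.interactions // self.per_stage, len(STAGES) - 1)
        return STAGES[idx]
```

The point of the sketch is the one-way progression: every interaction moves the user further along, and the final stage is absorbing — there is no transition back out of "manipulation".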
Many of us would be happy to transfer much of our decision-making processes into the hands of such a system, or at least consult with it whenever we face important choices. Google will advise us which movie to see, where to go on holiday, what to study in college, which job to accept, and even whom to date and marry.
Artificial Pleasures and Emotions: Marketing firms and political groups are buying access to services that model consumers’ psychology and thereby improve their ability to engage the subcultures in their target group, taking marketing propaganda to a new level of sophistication. Once built, these psychological profiles weaponize the social media platform into a means for manipulating any individual’s private psychology. And for what purpose? For the benefit of whoever is in control of the machine. The beneficiary could be the digital platform itself or its commercial clients – advertisers, political candidates, or anyone else willing to pay to influence a target audience. Such models of personal psychology are always learning from the feedback and getting smarter. The more a model is used, the better it gets, just as human psychologists get better at understanding clients the longer they interact with them.
Addictive Behavior Programming: Once someone’s hidden desires are identified, the content is selected to satisfy them. Those who long to travel can do so via Augmented Reality goggles that will transport them to the place of their dreams. Based on the idea that people prefer excitement to boredom and contentment to anxiety, digital marketing companies substitute artificial gratification to intervene and manipulate users’ emotions. The ultimate goal is to instill and increase reliance on the system to make default choices for users. The long-term effect is a loss of users’ agency, and this eventually leads them toward autopilot mode. This type of predictive functionality can be expanded gradually to select which products users will buy, the books they will read, the movies they will watch, and so on.
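The "default choice" mechanism described above can be sketched in a few lines: record which options a user has accepted in the past, then auto-select the historical favorite on their behalf. The class name, the feedback interface, and the option strings are all hypothetical illustrations, not any platform's actual API.

```python
from collections import Counter

class DefaultChooser:
    """Toy autopilot: picks the option the user historically accepted most."""

    def __init__(self):
        self.accepted = Counter()

    def record(self, option: str, accepted: bool) -> None:
        # Every accepted suggestion reinforces that option as the default.
        if accepted:
            self.accepted[option] += 1

    def default_choice(self, options: list[str]) -> str:
        # With no history, fall back to the first option offered.
        return max(options, key=lambda o: self.accepted.get(o, 0))
```

The feedback loop is the essential feature: each accepted default makes that default more likely next time, which is exactly the gradual slide toward autopilot mode the excerpt warns about.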
Self-esteem and reputation are critical to human existence. It is natural to seek social approval and avoid feeling rejected. The number of likes, followers, or subscribers on social media is a primary measure of self-esteem in today’s digital world. The drive for acceptance is why people are addicted to social media – continually checking for messages, notifications and emails, and anxious about whether their posts are liked and reposted or tweets re-tweeted. All this activity, mediated by the AI system, gratifies their biological desire for social validation. Social media does give us the power to articulate and assert ourselves, but this sense of empowerment is at the discretion of the platform. Ultimately, the more dependent we become on the provisional liberty granted us, the more the platform controls us. One way it controls us is through exploitation of our deepest desire to be treated with dignity and respect.
Digital giants can slant their algorithms to support content that aligns with their ideology. They filter what news is selected to be reported, the level of detail given to each item, and the nuance with which it is to be treated. They can also popularize specific fashions, trends, beliefs, interests, and fads. This power over our egos has granted Facebook what we can think of as bullying rights. It routinely attempts to bully people into compliance with its rules on the boundaries of free speech, using tactics such as arbitrarily blocking users or reducing the visibility of particular posts and videos. Facebook has developed algorithms to adjudicate and reinforce its idea of social justice. How popular various users are among their peers is influenced by how they obey or resist Facebook’s ideological norms.
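The "slanting" the excerpt describes can be sketched as a two-step ranking: score each post by engagement, then apply a visibility multiplier that suppresses anything the platform has flagged. The function, scores, and penalty factor are hypothetical — no platform publishes its real ranking code — but the structure (score, then down-weight) is the mechanism being described.

```python
def rank_feed(posts, flagged, penalty=0.1):
    """Sort posts by engagement, with flagged posts down-weighted.

    posts:   list of (post_id, engagement_score) tuples
    flagged: set of post_ids the platform wants to suppress
    penalty: multiplier applied to flagged posts' scores
    """
    def effective_score(post):
        post_id, score = post
        # A flagged post keeps only a fraction of its organic score,
        # quietly pushing it down the feed without deleting it.
        return score * (penalty if post_id in flagged else 1.0)

    return sorted(posts, key=effective_score, reverse=True)
```

Note that nothing is removed: a flagged post with ten times the organic engagement can still rank below an unflagged one, which is why this kind of visibility reduction is so hard for users to detect.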
Unfortunately, because social media is a system rigged to emotionally manipulate people on a large scale, it attracts and rewards mediocrity. Popularity among one’s peers is hardly an objective measure of merit. In fact, social media is counterproductive in the long run because it misleads individuals and even communities by providing an artificial barometer of success. The market for feeding pleasure and satisfying humanity’s cravings and aspirations is exploding, encompassing fantasy travel, virtual sports experiences, games, pornography, and whatever else people can imagine and desire. The playbook of the AI giants is eventually to have the maximum number of humans go through life on autopilot. Delegating one’s agency to a machine is like trusting a friend. Few people ever consider the ramifications of this transfer of power because they are blinded by the fulfillment of their desires. Most people are not balancing – nor even conscious of – the tradeoff between the gain in gratification and efficiency and their loss of free will.
Artificial Democracy: The approach of letting machines learn the psychology of individuals and then applying this knowledge for specific purposes has already seen success in politics. Former US president Barack Obama’s 2012 presidential campaign was the first major democratic election in which machine learning was used. Using AI software, his campaign team compiled a database of voter information from social media and other sources. Machine learning algorithms then used this data to build individual voter profiles and predict responses to various kinds of canvassing. Each night, his team ran 66,000 simulations of the election, and the system determined where to assign resources – whom to call, whom to visit, what to say, etc. His victory was a watershed event in the use of AI for political campaigns.
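The nightly simulation described above amounts to Monte Carlo estimation: draw many random election outcomes from per-state win probabilities, then direct resources toward the closest races. This is a toy sketch of that idea; the state names, probabilities, and the "closeness" targeting rule are hypothetical, not the campaign's actual method.

```python
import random

def simulate(win_prob, electoral_votes, trials=10_000, seed=0):
    """Estimate win rate via Monte Carlo and rank states for targeting.

    win_prob:        {state: probability of winning that state}
    electoral_votes: {state: electoral votes at stake}
    Returns (estimated win fraction, states ordered by closeness).
    """
    rng = random.Random(seed)  # fixed seed for reproducible runs
    total = sum(electoral_votes.values())
    wins = 0
    for _ in range(trials):
        # One simulated election: each state falls independently.
        ev = sum(v for s, v in electoral_votes.items()
                 if rng.random() < win_prob[s])
        if ev > total / 2:
            wins += 1
    # Resource targeting: races nearest a coin flip get attention first.
    targets = sorted(win_prob, key=lambda s: abs(win_prob[s] - 0.5))
    return wins / trials, targets
```

With enough trials the estimated win fraction stabilizes, and the sorted target list answers the operational questions the excerpt mentions: where to call, where to visit, where to spend.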
Four years later, Donald Trump’s son-in-law, Jared Kushner, led Trump’s presidential campaign using AI on an even larger scale. Their database of 220 million citizens covered nearly every US voter. The machine learning system used 5,000 separate data points to build each individual’s psychological profile, enabling the campaign to accurately pinpoint where and how to advertise and what message to send to each individual voter. The amazing success of this method was a shock to everyone in politics. Source: Artificial Intelligence and the Future of Power: 5 Battlegrounds by Rajiv Malhotra (2021).
Within technology, and especially when it comes to AI, we must continually remember to plan for both intended and unintended misuse. This is especially important today and for the foreseeable future, as AI intersects with everything in the global economy, the workforce, agriculture, transportation, banking, environmental monitoring, education, the military, and national security. This is why, if AI stays on its current development track in the United States and China, the year 2069 could look vastly different from the year 2019. As the structures and systems that govern society come to rely on AI, we will find that decisions being made on our behalf make perfect sense to machines – just not to us.
We’ve started to pass some major milestones in the technical and geopolitical development of AI, yet with every new advancement AI becomes more invisible to us. The ways in which our data is mined and refined are less obvious, while our ability to understand how autonomous systems make decisions diminishes. This is particularly bad because at the moment there is no single entity that can be held accountable for AI’s development. In the U.S., we have three epicenters of power: our federal government, the financial markets in New York City, and the West Coast between San Jose and Redmond at companies and universities where critical decisions are being made. Each nexus of power believes itself to be dominant. To wit: Microsoft has developed a corporate foreign policy and has departments like “Law Enforcement and National Security” and “Digital Diplomacy.” Meanwhile, many governments, including the UK and Denmark, have staffed diplomatic offices in Silicon Valley to operate as “tech embassies.”
Humanity is facing an existential crisis in a very literal sense, because no one is addressing a simple question that has been fundamental to AI since its inception. What happens to society when we transfer power to a system built by a small group of people that is designed to make decisions for everyone? What happens when those decisions are biased toward market forces or an ambitious political party? The book is both a warning and a blueprint for a better future. It questions our aversion to long-term planning in the US and highlights the lack of AI preparedness within our businesses, schools and government. It paints a stark picture of China’s interconnected geopolitical, economic, and diplomatic strategies as it marches toward its grand vision for a new world order. Source: The Big Nine: How the Tech Titans & Their Thinking Machines Could Warp Humanity by Amy Webb.
The unabbreviated version of the above can be found in the pdf document below.