Of the many, many issues facing the world today, AI policymaking may well be one that gets little attention. In fact, many people might find it boring. Yet it may represent one of the most important issues for the world's, and our nation's, long-term future.
We learned in the original Big Tech & AI Series about the power artificial intelligence has to shape future world societies in profound ways. Additionally, we learned about how its downsides can ruin those same societies. Click here to read more.
Regulation, both worldwide and within individual nations, is a must if we are to achieve the benefits of the technology and minimize its potential negatives. The application of the technology is advancing rapidly in today's world, yet its regulation is minimal. That is a prescription for failure and technological abuse.
This epilogue provides some insights into the issues and the need for regulation in the areas of governance, human rights, healthcare, education, automobile standardization, defense, and ethics. Next: This completes the epilogues on Big Tech and AI. Next, we move on to four epilogues for the past China series. The first is titled “U.S. Economic and Homeland Security.”
Happy Learning, Harley
BIG TECH & ARTIFICIAL INTELLIGENCE EPILOGUES – EPILOGUE 2 AI POLICY MAKING – EXCERPTS
NOTE: Excerpts are from Turning Point: Policymaking in the Era of Artificial Intelligence by Darrell M. West and John R. Allen (2020) INTRODUCTION: A group of technologists was asked by the Pew Research Center whether artificial intelligence (AI) would empower individuals and enhance human capacities. Sixty-three percent of these experts answered in the affirmative and indicated most people would be better off. Yet beneath that general optimism, a number of prominent thinkers also expressed doubts regarding these rosy scenarios. They worried about wealth concentration, algorithmic bias, and political authoritarianism. This litany of concerns suggests we are at a watershed moment in human history.
We do not think technologies in and of themselves will preordain utopia or dystopia, but people will control the future through the choices they make in the coming period. We believe regulations and policies will be crucial in determining AI’s future, as will corporate decisions, legal liability rules, and consumer sentiments. The resulting tapestry will be a product of policy and operational decisions made in the next several years. It is important to limit the possibility of an inadvertent or accidental dystopia that could emerge from poorly-thought-out choices.
POLICY ISSUES: Governance: Today’s system is in decline. Many technology decisions have migrated from the world of government to private companies, which in some cases act as proto-states in and of themselves. Their coders, engineers, and computer scientists make decisions all the time that affect the way people communicate, what information is at their disposal, how they buy products, and the manner in which democracy functions. Few of their choices are subject to detailed government oversight, since, until recently, many countries have taken a hands-off stance regarding private-sector technology innovation. As AI advances, these kinds of governance issues must be resolved.
Biases in Data and Algorithms: In some instances, certain AI systems have enabled discriminatory or biased practices. Issues of racial discrimination have come up regarding facial recognition software. In looking at differences by race, gender, and age, analysts discovered that algorithms misidentified African American and Asian American faces “10 to 100 times more than Caucasian faces,” that they had greater errors for women than for men, and that they “falsely identified older adults up to 10 times more than middle-aged adults.”
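The “10 to 100 times more” figures above come from comparing misidentification rates across demographic groups. A minimal sketch of how such a disparity ratio is computed is shown below; the group names and records are invented for illustration, not drawn from any real study.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, was_misidentified)
results = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False), ("group_b", False), ("group_b", False),
]

def error_rates(records):
    """Return the misidentification rate for each demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, missed in records:
        totals[group] += 1
        errors[group] += missed  # True counts as 1, False as 0
    return {g: errors[g] / totals[g] for g in totals}

rates = error_rates(results)
# A ratio of per-group error rates yields the kind of
# "X times more" figure quoted in the excerpt above.
disparity = rates["group_a"] / rates["group_b"]
```

Audits of deployed systems use far larger samples and distinguish false positives from false negatives, but the underlying comparison is of this form.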
Offensive Applications: Discrimination and privacy intrusions are not the only risks in the digital world. There are “deepfakes,” computer-created artificial videos or other digital material that show well-known people either saying or doing something that is highly offensive but that never actually happened. These types of inventions show how far the technology has come and what the moral risks are.
Legal Liability: An Uber-related fatality in Arizona represents an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in road testing. Yet in the case of accidents, it is unclear who gets sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, the software developers, the data analytics experts, or the auto manufacturer. Given the multiple people and organizations involved in autonomous vehicles, there are many legal questions to be resolved regarding AI-directed harms.
Geopolitical Considerations: Researchers Rob Atkinson and Caleb Foote examined 36 indicators comparing China and the U.S. and found that “on all indicators China has closed the gap or, in some cases, extended its lead over the U.S.” As an illustration, they looked at R&D investments, workforce development, the number of scientific articles, patents, trade, and manufacturing, and found the Chinese have advanced rapidly in every area. China is no longer a country that merely copies in order to succeed; rather, it has a skilled workforce with great technological expertise.
Human Control: As AI advances, the fear is algorithms will make so many decisions that individuals will be supplanted by hidden or indecipherable code.
Summation: One crucial point to remember across all of these areas is that there are particular ways people can retain control and are likely to do so in the future. This includes policy decisions, regulatory actions, legal liabilities, corporate self-policing, and public opinion that demands reasonable safeguards. If we utilize such actions, there is little reason to fear the technologies that are moving full steam ahead. Our task is to figure out how to gain digital benefits while minimizing AI’s detrimental, discriminatory, or dangerous features.
HEALTHCARE POLICY: AI is bringing new tools into the medical arena. Through natural language processing, it is possible to analyze thousands of scientific articles for insights that can lead to new drugs, medical treatments, or clinical processes. That could expedite drug discovery and improve the treatment of large numbers of patients. Yet medical algorithms also introduce a number of possible biases into healthcare based on race, gender, age, income, and geography.
As AI algorithms move into healthcare, it will be important to maintain a “duty of care” and a sense of fiduciary responsibility among medical providers. The typical treatment may vary depending on whether the doctor consults traditional experts or data algorithms. Clinical decision support software may incorporate different results or considerations in recommendations and therefore introduce new kinds of legal liability. As AI in healthcare becomes more prevalent, it will be crucial to work out the patient ramifications.
EDUCATIONAL REFORM: Both international competitiveness and national security depend on having an equitable, inclusive, and forward-looking educational system. One of AI’s virtues is its ability to personalize instruction for individual students. A shortcoming of time-based approaches is that they equate time spent with subject-area knowledge. As an alternative, a “mastery-based” approach would work better than one based on time. The winners of this upcoming AI-defined era in human history will be the countries and companies that can create the most powerful algorithms, assemble the most talent, collect the most data, and marshal the most computing power. Progress in these areas is especially important given the proliferation of international challenges. The way in which education prepares the next generation of leaders will directly determine whether the U.S. retains its leadership in critical fields of relevance in the emerging digital environment. A future in which the U.S. performs poorly in the race for AI technology would create a situation of cyber inferiority and a threat to national well-being and security.
TRANSPORTATION – DRIVERLESS VEHICLES: United States: The biggest American challenge is overcoming the fragmentation of diverse state governments and establishing uniform guidelines across geographic boundaries. Public officials should address questions of who regulates, how they regulate, legal liability, and privacy. In the past two years, 23 states have introduced 53 pieces of legislation that affect self-driving cars – all of which include different approaches and concepts. None of those laws feature common definitions, licensing structures, or sets of expectations for what manufacturers should be doing.
DEFENSE TECHNOLOGY: AI will dramatically change the speed of war. It will not only enhance the human role in conflict but will also leverage technology as never before. Our national defense stands as one of the most consequential areas of development for the 21st century. AI, once fully realized, has the potential to be the single greatest force for military and security forces in human history. AI technology could, for example, facilitate autonomous operations, lead to faster, more informed military decision-making, and increase the speed and scale of military action.
Hypersonics: Hypersonics can achieve speeds ranging from five to ten times the speed of sound. They represent a game-changing technology for national defense, especially when paired with AI and nuclear weapons. What’s more, the most advanced hypersonics are capable of being so unpredictable in their trajectories that no known modern radar or missile defense system can accurately discern their intended targets, let alone intercept them prior to reaching their destination. Hypersonics may be one of the defining technologies of the 21st century and bring with them as yet unanticipated and potentially untold impacts upon global nuclear stability.
5G Technology: China may soon achieve total domination in the 5G technology space and in the process secure valuable competitive and security advantages across a number of sectors. If China owned and operated the entirety of a country’s 5G network, all of that country’s internally networked data could in theory be sent back to China for analysis, processing, and machine learning. This presents serious concerns given China’s already prolific access to big data sets, never mind the immediate intelligence concerns implicit in unfettered network access by a foreign power.
DEFENSE POLICY: The United Nations has argued that humans need to remain involved in the “identification and selection of targets and then the use of force against them, lethal or otherwise.” As weapons systems become more sophisticated, military leaders must decide whether to maintain this “human in the loop” approach or allow situations where autonomous weapons systems can fire based on specified criteria. Competitors and opponents, be they states or non-state actors, are unlikely to feel so constrained, and the U.S. military is likely to experience significant pressures from this space given the self-imposed restrictions it has placed on the development and deployment of these new systems.
ETHICAL SAFEGUARDS: AI Ethics: The ubiquity of AI applications raises a number of ethical concerns, such as questions of bias, fairness, safety, transparency, and accountability. People worry about the possibility of discriminatory behavior based on algorithms, a lack of fairness, limited safety for humans, an absence of transparency about how the software operates, and poor accountability for AI outcomes. Without systems that address these concerns, the worry is that AI will be biased, unfair, or lack transparency.
Government Surveillance: Government surveillance represents a growing threat in many countries where leaders have turned toward authoritarianism in recent years. In 2018, China had 350 million cameras in place, or an estimated one camera for every four people, which makes surveillance possible on an unprecedented scale. When combined with AI analysis that matches images with personal identities, the capacity for in-depth population control is enormous. China is expanding its use of social credit systems for daily life. It compiles data on people’s social media activities, their dealings with other people, and whether they pay taxes on time, and then uses the resulting score to rate people for creditworthiness, travel, school enrollment, and government positions. These systems are highly problematic from an ethical standpoint.
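The scoring mechanism described above is, at its core, an aggregation of disparate behavioral signals into a single rating. The toy sketch below illustrates that mechanism only; every field name and weight is invented and does not describe any real system.

```python
def credit_score(record, weights):
    """Weighted sum of behavioral signals, clamped to a 0-1000 scale."""
    raw = sum(weights[k] * record.get(k, 0) for k in weights)
    return max(0, min(1000, round(raw)))

# Hypothetical signals and weights, for illustration only.
weights = {
    "pays_taxes_on_time": 400,   # 1 if taxes paid on time, else 0
    "social_media_flags": -150,  # penalty per flagged post
    "peer_rating": 300,          # 0.0-1.0 rating from associates
}

citizen = {"pays_taxes_on_time": 1, "social_media_flags": 2, "peer_rating": 0.5}
score = credit_score(citizen, weights)  # 400 - 300 + 150 = 250
```

The ethical problem the excerpt raises is visible even in this sketch: once heterogeneous behaviors are collapsed into one number, that number can gate unrelated opportunities such as travel or school enrollment.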
Incorporate Ethics in Decision-Making: To protect human values, we recommend developing responsible AI principles based on fairness, transparency, and human safety; hiring ethicists, designing codes of ethics, and developing AI ethics review boards; and having annotated software and AI audit trails, mandating AI training programs, and providing a means of remediation in cases of discernible consumer harm.
SUMMATION: The world is on the cusp of revolutionizing many sectors through artificial intelligence, machine learning, and data analytics. There already are significant deployments in finance, national security, healthcare, criminal justice, transportation, energy management, and smart cities that have altered decision making, business models, and system performance. AI is being utilized in virtually every sector and transforming the way people communicate, buy goods and services, undertake transactions, and learn from one another.
Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, and legal realities are resolved, and how much transparency is required in AI and data analytic solutions. Exactly how these processes are executed needs to become better defined, because they will have substantial impact on the general public. Whether individuals are helped or harmed by health information technology, personalized education, autonomous vehicles, e-commerce, and autonomous weapons depends on the policy, legal, and regulatory environment as well as corporate decisions, international competition, and public opinion.
If leaders make appropriate policy and operational decisions of the sort recommended in this book, we are quite optimistic about our AI future. In many areas, advanced technologies will improve medical care and education, help seniors and the disabled gain mobility, promote social and economic opportunity, and safeguard national defense. However, if leaders fail to make the right policy choices, the world could disintegrate into stark inequality, a lack of personal privacy, widespread unfairness, and political authoritarianism. Source: Turning Point: Policymaking in the Era of Artificial Intelligence by Darrell M. West and John R. Allen (2020)
The unabbreviated version of the above can be found in the pdf document below.