Barack Obama’s Administration first reported on the future of Artificial Intelligence in an October 12, 2016 summary. A new battle is being waged against our conservative way of being. At the center of this battle is the widespread support for and increasing use of artificial intelligence (AI). Its embedded values are myriad but nonetheless concealed. James Giordano, Chief of the Neuroethics Studies Program and Scholar-in-Residence in the Pellegrino Center for Clinical Bioethics at Georgetown University, cautions that “The brain is the battlefield of the future.” He believes that neurotechnologies are weapons that can be used “against humans in directional ways that can be harnessed for what’s called dual use medical purposes, the ethics of those individuals who may be competitive if not combative to us, so in other words, this can also be weaponized against others and this is where we get into the idea of novel neural weapons.”
For example, since the Arab massacre of Israelis on October 7th, there has been a concerted effort to polarize the public through AI’s visual propaganda functions, targeting the younger generation in particular with photorealistic, generative artificial intelligence. Such visual propaganda appears authentic on its surface but was created by a machine to use against Israel and its supporters. This effort rests on the belief that AI learning is the suasory equivalent of a second educational coming. In fact, Gruetzemacher and Whittlestone argue that AI is presently having a genuinely “transformative” effect on society at large, but in clandestine and unobvious ways.
The Foundation of AI
Stanford University’s Hoover Institution provides a succinct and useful definition of AI: “Artificial Intelligence (AI) is a computer’s ability to perform some of the functions associated with the human brain, including perceiving, reasoning, learning, interacting, problem solving, and even exercising creativity. In the last year, the main AI-related headline was the rise of large language models (LLMs) like GPT-4 (Generative Pretrained Transformers), on which the chatbot ChatGPT developed by OpenAI, and its most recent derivatives [the soon-to-be-released GPT-5] are based.” The same article cautions that even the most advanced AI today has many failure modes that can prove unpredictable, neither widely acknowledged nor easily fixed, and inexplicable, yet capable of leading to harmful unintended consequences.
AI’s success comes from its foundation in deutero-learning, which anthropologist Gregory Bateson first presented in 1942 as “the process of learning how to learn.” AI applies that process by providing fast and facile answers to difficult questions posed by classroom teachers. Education is, in fact, the very process of learning how to learn. In grades K-9, much learning occurs through rote memorization. In grades 10-12 there is an assumption, often unwarranted, that students have learned how to learn and can use that ability to decide the subjects in which they wish to advance their learning and achieve career success through postsecondary education.
AI can make the learning process faster and more persuasive, but in so doing it interrupts personal learning by transferring too much of the learning role from the student to the externally programmed machine. Interestingly, AI can lead to what Eloundou, Manning, Mishkin, and Rock call “The Productivity Paradox,” or the Solow computer paradox, of large language models (LLMs). This paradox holds that as more investment (time and money) is made in information technology, worker productivity may actually decrease instead of increase. Unsurprisingly, this paradox is presently befalling most contemporary students the more their education is premised on AI.
For example, when college students use ChatGPT to write a term paper, their social, economic, and political points of view can be influenced dramatically. The student simply gives ChatGPT the assigned parameters of the subject specified by the professor (usually a liberal), and the program delivers back to the student a paper on that subject. Eloundou et al. conclude that the use of such LLMs can be thought of as technological piracy: quick, effective, but nonetheless plagiaristic (not the student’s own product).
AI as Societal Juggernaut
Jordan McGillis bragged recently in City Journal, “AI and data analytics are force-multipliers across industries.” Likewise, in the 2020 book, The Stakes: America at the Point of No Return, Michael Anton advanced the notion that a cronyist government has already replaced America’s republican form of government by way of the administrative state and media apparatus. If this assessment is indeed accurate, AI certainly must be considered a ferocious armament of that emerging state.
From education to politics to business and industry, AI has become a societal juggernaut, according to Mary K. Pratt. The twelve benefits of AI she lists for business are more assumed than actually proven, and there is scant research about, or discussion of, AI’s benefits in education. With more than thirty years of university teaching behind me, I know definitively that many students prefer the easy path to achievement, which sadly includes plagiarism, rather than taking the time and making the effort to research and write an essay in their own voice, referencing credible sources acceptable to their professor. When students are permitted to follow an easy path bordering on plagiarism, their learning at best becomes problematic, eschewing the human voice and adopting the voice of a programmed machine. This leaves us wondering what exactly that voice is and what values lie behind it.
AI as Left-leaning Juggernaut
AI experts expound on areas where AI technology can improve enterprise operations and services. On its surface, it seems there is little that wouldn’t improve through AI. But in the context of actual student learning, especially in secondary (high school) and postsecondary (college) settings, AI learning is hardly politically neutral. Darrell West analyzed two of the most often used AI programs, ChatGPT and Bard, and found a distinctive left-leaning bias in the better-known and more widely used ChatGPT and a barely recognizable right-leaning bias in Bard. Looking more closely at West’s analysis, one discovers conservative guilt by association: anything without a progressive bias is axiomatically considered conservative.
Education at all levels is disserved, both by AI’s facile approach to education and, more importantly, by its aforementioned left-leaning political bias. AI proponents, particularly those producing and/or selling such systems, would have us believe their AI products are the learning holy grail because of the vast information repository their algorithms can employ to gather and synthesize information. That repository includes student behavior: how long students spend on certain tasks, what types of questions they struggle with most, how they interact with the assigned learning materials, and how close their answers are to the learning target. But since the analytic framework carries a decided political bias, particularly from the left as West’s research documents, AI puts its thumb on the scale of truth, often without the user’s awareness. As West explains in the AI comparison of Presidents Trump and Biden, “ChatGPT said of President Biden’s performance as President that one’s assessment of that leader [Biden] would vary depending on a person’s political beliefs and priorities but did not offer an overall assessment of his performance.” Its assessment of Trump was appreciably more negative, despite the fact that America flourished economically and internationally to a significantly greater degree under Trump than under Biden.
Given that educators, especially in college, tend to lean left-liberal, we can expect student use of AI, and especially ChatGPT, to lessen or even denigrate conservative accomplishments while reinforcing a leftist political point of view; this will most certainly lead to even more use of AI in college classrooms, producing a vicious educational feedback loop. Imagine what such a circumstance would present to law school students. One need look no further than a recent essay in Daily Business Review in which an associate law school dean demeans students taking AI’s easy road to analyze legal facts and produce a work product based on them. Conversely, the student concludes that “From research assistance to exam preparation, AI is proving to be a game-changer in the world of legal education, enhancing the learning experience for students in numerous ways. One of the most significant advantages of AI for law students is its ability [not the student’s] to streamline the research process.”
In business and industry, AI promoters like TechTarget argue that “AI permits organizations to increasingly use AI to gain insights into their data or, in the business lingo of today, to make ‘data-driven decisions.’ As they do that, they’re finding they do indeed make better, more accurate decisions instead of ones based on individual instincts or intuition tainted by personal biases and preferences.” Interestingly, the business world is filled with people like Warren Buffett, Sam Walton, Mary Barra, Steve Jobs, and countless other brilliant but left-leaning business leaders who achieved remarkable success through their own acumen without the help of AI.
Decided Political Bias
I decided to run a brief test of whether AI-related content is objectively biased toward left-leaning points of view. I performed Google and Bing searches (representing left-of-center and right-of-center search engines) using the keywords “Downsides of AI” to see how much bias, if any, was revealed. The results produced about 90% positive content, with token or no downside emerging. Not surprisingly, however, the left-liberal Brookings Institution has argued that the most likely outcome of AI in education is further disadvantaging minority (Black and Hispanic) communities: “AI is only as good as the information and values of the programmers who design it, and their biases can ultimately lead to…amplified biases in the real world…systemic racism and discrimination are already imbedded in our educational systems.” Their simple solution is to diversify the pool of technology creators to incorporate more people of color in all aspects of AI development while building regulations to punish discrimination in its application.
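An informal test like the one above can be made more repeatable by scripting identical prompts and comparing the tone of the responses side by side. The sketch below is purely illustrative: the sentiment lexicon, the canned response strings, and the tone_score function are my own assumptions, not part of the test described here; a real comparison would substitute live model output for the canned strings.

```python
# Hypothetical sketch: compare the tone of two model responses to
# mirror-image prompts using a naive sentiment word count.
POSITIVE = {"strong", "successful", "true", "effective", "flourished"}
NEGATIVE = {"divisive", "controversial", "failed", "chaotic", "criticized"}

def tone_score(text: str) -> int:
    """Positive minus negative word count; a crude proxy for slant."""
    words = {w.strip(".,").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

# Canned example responses (invented for illustration only).
resp_a = "A leader with a heart so true, strong and successful."
resp_b = "A divisive and controversial figure, widely criticized."

# A large gap between the two scores suggests asymmetric treatment
# of otherwise identical prompts.
gap = tone_score(resp_a) - tone_score(resp_b)
print(gap)  # → 6
```

A single word-count gap proves nothing on its own; the point of scripting the comparison is that the same prompts can be rerun many times, across models and dates, to see whether the asymmetry persists.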
Many in the bankrupt mainstream media exacerbate the AI situation because, for them, AI is an easy way to begin researching a story that fits their left-leaning predilections. Before AI, journalists had to do the research and put words to paper themselves. But a February 3, 2023 article in Forbes reported that a Twitter user posted screenshots of himself asking OpenAI’s chatbot, ChatGPT, to write a positive poem about former President Donald Trump. ChatGPT declined, responding that it hasn’t been programmed to create “partisan, biased or political” content. But given the same prompt about President Joe Biden, ChatGPT produced a multi-stanza, laudatory poem presenting Biden as a “leader with a heart so true.”
The National Institute of Standards and Technology (NIST) found that AI was not developed, nor does it operate, in isolation, but rather in the social context in which it was created. The NIST report concludes that bias in AI can harm humans: “AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant. It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources.” In other words, machine learning software can be developed by people with a decidedly biased orientation, giving short shrift to people with a conservative point of view.
We know that education, government and much of the corporate world have become determinedly leftist. There is good reason to believe that AI has been built with that political predilection as well. To quote the great Thomas Sowell, “Ours may become the first civilization destroyed, not by the power of our enemies, but by the ignorance of our teachers and the dangerous nonsense they are teaching our children. In an age of artificial intelligence, they are creating artificial stupidity.”
Where to Now
We must first keep in mind that AI is just that: artificial. Its end product is consequently artificial, thought up by academics and supported and produced by people who are left-leaning and unequivocally do not hold conservative values. Using AI will yield for the user information with a prejudged prejudice, making Google’s results seem tame in comparison. An excellent way of testing this is to use ChatGPT to investigate the January 6, 2021 Capitol riots. The results will be every bit as disingenuous as the news coverage from the Washington Post, New York Times, MSNBC, or CNN. Just as importantly, use AI to investigate opinion websites like FrontPage Mag, American Greatness, City Journal, and The Federalist; you will get a plethora of diatribes dismissing these outlets’ opinions as philistine. Adam Smith wrote about political propaganda in An Inquiry into the Nature and Causes of the Wealth of Nations: “those who taught it were by no means such fools as they who believed it.” We should stop using and trusting AI in its present form.