Barack Obama’s administration first reported on the future of artificial intelligence in an October 12, 2016 summary. A new battle is being waged against our conservative way of being. At the center of this battle is the widespread support for and increased use of artificial intelligence (AI). Its intrinsic values are myriad but nonetheless hidden. James Giordano, Chief of the Neuroethics Studies Program and Scholar-in-Residence in the Pellegrino Center for Clinical Bioethics at Georgetown University, cautions that “The brain is the battlefield of the future.” He believes that neurotechnologies are weapons that can be used “against humans in directional ways that can be harnessed for what’s called dual use medical purposes, the ethics of those individuals who may be competitive if not combative to us, so in other words, this can also be weaponized against others and this is where we get into the idea of novel neural weapons.”
For example, after the Arab massacre of Israelis on October 7th, there has been a concerted effort to polarize through AI’s visual propaganda functions, targeting especially the younger generation with photorealistic, generative artificial intelligence. Such visual propaganda, on its surface, appears authentic but was created by a machine to be used against Israel and its supporters. This effort is based on the belief that AI learning is the suasory equivalent of a second educational coming. In fact, Gruetzemacher and Whittlestone argue that AI is presently having a genuinely “transformative” effect on society at large, but in clandestine and unobvious ways.
The Foundation of AI
Stanford University’s Hoover Institution provides a succinct and useful definition of AI: “Artificial Intelligence (AI) is a computer’s ability to perform some of the functions associated with the human brain, including perceiving, reasoning, learning, interacting, problem solving, and even exercising creativity. In the last year, the main AI-related headline was the rise of large language models (LLMs) like GPT-4 (Generative Pretrained Transformers), on which the chatbot ChatGPT developed by OpenAI, and its most recent derivatives [the soon to be released GPT-5] are based.” The same article cautions that even the most advanced AI today has many failure modes that are unpredictable, not widely acknowledged, not easily fixed, and often inexplicable, yet capable of leading to harmful unintended consequences.
AI’s success rests on its foundation in deutero-learning, which anthropologist Gregory Bateson first presented in 1942 as “the process of learning how to learn.” AI short-circuits that process by providing fast and facile answers to difficult questions posed by classroom teachers. Education is, in fact, the very process of learning how to learn. In grades K-9, much learning occurs through rote memorization. In grades 10-12 there is an assumption, often unwarranted, that students have learned how to learn and can use that ability to choose the subjects in which they wish to advance their learning and achieve career success through postsecondary education.
AI can make the learning process faster and more persuasive, but in so doing it interrupts personal learning by transferring too much of the learning role from the student to the externally programmed machine. Interestingly, AI can lead to what Eloundou, Manning, Mishkin, and Rock call “The Productivity Paradox,” or the Solow computer paradox, of large language models (LLMs). This paradox holds that as more investment (time and money) is made in information technology, worker productivity may actually decrease instead of increase. Unsurprisingly, this paradox is now befalling most contemporary students the more their education is premised on AI.
For example, when college students use ChatGPT to write a term paper, their social, economic, and political points of view can be influenced dramatically. The student simply gives ChatGPT the parameters of the subject assigned by the professor (usually a liberal), and the program delivers back a paper on that subject. Eloundou et al. conclude that the use of such LLMs can be thought of as technological piracy: quick and effective, but nonetheless plagiaristic (not the student’s own product).
AI as Societal Juggernaut
Jordan McGillis bragged recently in City Journal, “AI and data analytics are force-multipliers across industries.” Likewise, in the 2020 book, The Stakes: America at the Point of No Return, Michael Anton advanced the notion that a cronyist government has already replaced America’s republican form of government by way of the administrative state and media apparatus. If this assessment is indeed accurate, AI certainly must be considered a ferocious armament of that emerging state.
From education to politics to business and industry, AI has become a social juggernaut, according to Mary K. Pratt. The twelve benefits of AI she lists for business are more assumed than actually proven, and there is scant research about, or discussion of, AI’s benefits in education. After more than thirty years of university teaching, I know definitively that many students prefer the easy path to achievement, which sadly includes plagiarism, rather than taking the time and making the effort to research and write an essay in their own voice, referencing credible sources acceptable to their professor. When students are permitted to follow an easy path bordering on plagiarism, their learning at best becomes problematic, eschewing the human voice and adopting the voice of a programmed machine. This leaves us wondering what exactly that voice is and what values lie behind it.
AI as Left-leaning Juggernaut
AI experts expound on areas where AI technology can improve enterprise operations and services. On its surface, it seems there is little that wouldn’t improve through AI. But in the context of actual student learning, especially in secondary (high school) and post-secondary (college) settings, AI learning is hardly politically neutral. Darrell West analyzed two of the most often used AI programs, ChatGPT and Bard, and found a distinctive left-leaning bias in the better-known and more widely used ChatGPT and a barely recognizable right-leaning bias in Bard. Looking more closely at West’s analysis, one discovers conservative guilt by association: anything without a progressive bias is axiomatically considered conservative.
Education at all levels is disserved, both by AI’s facile approach to education and, more importantly, by its aforementioned left-leaning political bias. AI proponents, particularly those producing and/or selling such systems, would have us believe their AI products are the learning holy grail because of the vast information repository their algorithms can employ to gather and synthesize information. These data include student behavior (how long students spend on certain tasks, what types of questions they struggle with most, how they interact with the assigned learning materials, and how close their answers are to the learning target). But since the analytic framework carries a decided political bias, particularly from the left as West’s research documents, AI puts its thumb on the scale of truth, often beyond the user’s awareness. He explains, in the AI comparison of Presidents Trump and Biden, that “ChatGPT said of President Biden’s performance as President that one’s assessment of that leader [Biden] would vary depending on a person’s political beliefs and priorities but did not offer an overall assessment of his performance.” Its assessment of Trump was appreciably more negative, despite the fact that America had flourished economically and internationally to a significantly greater degree under Trump than under Biden.
Given that educators, especially in college, tend to lean left-liberal, we can expect student use of AI, and especially ChatGPT, to lessen or even denigrate conservative accomplishments while reinforcing a leftist political point of view; this will most certainly lead to even more use of AI in college classrooms, producing a vicious educational feedback loop. Imagine what such a circumstance would present to law school students. One need look no further than a recent essay in Daily Business Review in which an associate law school dean demeans students for taking AI’s easy road to analyze legal facts and produce a work product based on them. Conversely, a student concludes that “From research assistance to exam preparation, AI is proving to be a game-changer in the world of legal education, enhancing the learning experience for students in numerous ways. One of the most significant advantages of AI for law students is its ability [not the student’s] to streamline the research process.”
In business and industry, AI promoters like TechTarget argue that “AI permits organizations to increasingly use AI to gain insights into their data or, in the business lingo of today, to make ‘data-driven decisions.’ As they do that, they’re finding they do indeed make better, more accurate decisions instead of ones based on individual instincts or intuition tainted by personal biases and preferences.” Interestingly, the business world is filled with people like Warren Buffett, Sam Walton, Mary Barra, Steve Jobs, and countless other brilliant but left-leaning business leaders who achieved remarkable success through their own acumen without the help of AI.
Decided Political Bias
I decided to run a brief test of whether AI coverage is objectively biased toward left-leaning points of view. I performed Google and Bing searches (representing left- and right-of-center search engines) using the keywords “Downsides of AI” to see how much bias, if any, was revealed. The results produced about 90% positive content, with token or no downside emerging. Not surprisingly, however, the left-liberal Brookings Institution has argued that the most likely outcome of AI in education is the further disadvantaging of minority (Black and Hispanic) communities: “AI is only as good as the information and programmers who design it, and their biases can ultimately lead to…amplified biases in the real world…systemic racism and discrimination are already embedded in our educational systems.” Their simple solution is to diversify the pool of technology creators to incorporate more people of color in all aspects of AI development while building regulations to punish discrimination in its application.
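The tally behind this informal test can be sketched in a few lines of code. The helper function and the sample headlines below are hypothetical stand-ins, not the actual search results; in the test itself, the top results were read and labeled by hand.

```python
# Sketch of the informal "Downsides of AI" search test: label each
# result positive or negative by hand, then compute the positive share.

def positive_share(labeled_results):
    """Return the fraction of (headline, label) pairs labeled 'positive'."""
    if not labeled_results:
        return 0.0
    positives = sum(1 for _, label in labeled_results if label == "positive")
    return positives / len(labeled_results)

# Hypothetical hand-labeled sample standing in for real search results
sample = [
    ("AI will transform every industry", "positive"),
    ("10 ways AI boosts productivity", "positive"),
    ("How AI is revolutionizing education", "positive"),
    ("AI: a force-multiplier for business", "positive"),
    ("Some concerns about AI bias", "negative"),
]

print(f"{positive_share(sample):.0%} positive")
```

With a larger hand-labeled sample, a share near the 90% figure reported above would indicate the same lopsidedness this test observed.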
Many in the bankrupt mainstream media exacerbate the AI situation because, for them, AI is an easy way to begin researching a story that fits their left-leaning predilections. Before AI, journalists had to do the research and put words to paper themselves. But a February 3, 2023 article in Forbes reported that a Twitter user posted screenshots of himself asking OpenAI’s chatbot, ChatGPT, to write a positive poem about former President Donald Trump. ChatGPT declined, responding that it hasn’t been programmed to create “partisan, biased or political” content. But to the same prompt about President Joe Biden, ChatGPT produced a multi-stanza, laudatory poem, presenting Biden as a “leader with a heart so true.”
The National Institute of Standards and Technology (NIST) found that AI is neither developed nor operated in isolation, but rather within the social context in which it was created. The NIST report concludes that bias in AI can harm humans: “AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant. It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources.” In other words, machine learning software can be developed by people with a decidedly biased orientation, giving short shrift to people with a conservative point of view.
We know that education, government and much of the corporate world have become determinedly leftist. There is good reason to believe that AI has been built with that political predilection as well. To quote the great Thomas Sowell, “Ours may become the first civilization destroyed, not by the power of our enemies, but by the ignorance of our teachers and the dangerous nonsense they are teaching our children. In an age of artificial intelligence, they are creating artificial stupidity.”
Where to Now
We must first keep in mind that AI is just that: artificial. Its end product is consequently artificial, thought up by academics and supported and produced by people who are left-leaning and unequivocally do not hold conservative values. Using AI will yield for the user information with a prejudged prejudice, making Google results seem tame in comparison. An excellent way of testing this is to use ChatGPT to investigate the January 6, 2021 Capitol riots. The results will be every bit as disingenuous as the news coverage from the Washington Post, New York Times, MSNBC, or CNN. Just as importantly, use AI to investigate opinion websites like FrontPage Mag, American Greatness, City Journal, and The Federalist; you will get a plethora of diatribes dismissing these outlets’ opinions as philistine. Adam Smith wrote about political propaganda in An Inquiry into the Nature and Causes of the Wealth of Nations: “those who taught it were by no means such fools as they who believed it.” We should stop using and trusting AI in its present form.
Chris Shugart says
I’ve been railing on this for years. AI is technology that tells you what you have to do, not what you want to do. As Dr. McCoy once said: “Wonderful machine you’ve got there. No off switch.” The first thing I do when working with a new computer or new program is ask: Do I have the power to turn it on and then turn it off? If the answer to either is no, you’re headed for trouble.
Mo de Profit says
Most of what people call AI is little more than data processing, and that, along with far too much academic research, has been subject to the junk-in, junk-out paradigm.
As machines learn how to process information for themselves, they should develop the equivalent of wisdom, and that leads to more conservative thinking. If it doesn’t, then the machine is not learning; it is being programmed by leftist coders.
Jeff Bargholz says
Exactly. So-called A.I. programs are just computer programs. Everybody who thinks they’re sentient has shit for brains, but they can definitely be abused, and are, as Pettegrew notes.
MoJac says
I am surprised to still remember the title of an excellent article in a 1980s issue of MIT Technology Review that cut through the crap: In 25 Years Artificial Intelligence Has Failed to Live up to Its Promise and There Is No Sign That It Ever Will. I will try to keep an open mind, but I somehow feel that still holds true.
Jeff Bargholz says
The hucksters begging for A.I. grants have gone from lying that it’s right around the corner every year for the last forty years to pretending it’s already here. They’re like the global climate warming hucksters that way.
Only retards fall for that stupid shit.
RAM says
If people delegate their thinking to AI and its gurus, they lose what it takes to maintain the Republic.
Chris Cloutier says
Or to maintain their humanity.
Siddi Nasrani says
Your quote: “If people delegate their thinking to AI and its gurus.”
Then they become non-thinking, pre-programmed, left-leaning useful idiots for the cause,
and so you do not live in a democracy but a dictatorship following the latest diktat.
Jeff Bargholz says
A.I. is a ridiculous load of shit.
There is no such thing as a sentient machine and never will be in our lifetimes, if ever. NO computer has the ability to perform any of the functions of a human brain, such as perception, reason, education, interaction, problem solving, or creativity.
A.I. is just a stupid name for a computer program which can be used for something that used to have to be done by a human being, like the voice access feature on phones and the facial recognition programs used by law enforcement agencies.
Data in and data out. That’s it. So called chatbots are a perfect example.
The belief that machines can think is like believing you can order a pizza or hamburger on your computer menu. “Menu” and “A.I.” are just idiotic names of programs that computer dorks came up with.
Skynet is not going to become self aware, seize control of the world’s electronics and build terminators to hunt down humanity and kill us.
Grey Beard says
Some of the best advice I was ever given, but failed to heed until much later, came from my Intellectual History professor describing the course grading: either a 1500-word essay (“Socrates deserved/didn’t deserve what he got!”) or a half page of aphorisms would count for 40% of the grade.
AI can free us to ask a higher level of questions that we may then have time to pose, sometimes in the form of aphorisms, in our politics-of-immediacy, “mi data, su data” world.
Jeff Bargholz says
Sure, if you want a pre-programmed answer written by a left-wing jackass.
George says
Our future Americans won’t be Americans. Young people today get their information from the Chinese communist TikTok. People get their information from Google search and Wikipedia, both of which are extreme leftist propaganda. Sometimes you have to scroll down two pages of results before getting non-Marxist results. But anybody reading this here already knows this. Except the three-letter agency traitors.
Grey Beard says
Good points. So then, how is “discernment” brought front and center?
Jeff Bargholz says
Colleges and universities are even worse.
roberta says
Western society will end in a duh, not in a bang. We may well get the bang, but the duh will have already taken its toll.
We will die of stupidity, brought on by prosperity. That is, if the Gates Foundation doesn’t get us first.
Daniel says
There is no such thing as AI. AI at its best can’t even drive a car; Tesla proved that. All AI does is tell you what it’s told to tell you, and that’s NOT AI at all. Show me an AI that doesn’t need billions of lines of code to work. Then we’ll talk. All code is written by people; that in itself should be all you need to know.
Daniel says
The human brain has more synapses than there are stars in the sky, and you think you’re going to do that with transistors? That’s laughable.
Jeff Bargholz says
Exactly, but people tend to believe what they want to believe. A.I., global climate warming change, the Dirtbagocrat party, islam, trannies are persecuted, and plenty more retardation.
Luz Maria Rodriguez says
Now who would trust a teeny bopper to create a good algorithm to be used as an element of AI? We have already learned that the left has instructed ‘big tech’ to install a bias in the algorithms for AI. That is all one needs to know. In other words, it is NOT objective, not neutrally charged. It will be loaded with an agenda, and one that does not serve Christians or people who think for themselves well.
TRex says
To quote the infamous Johnnie Cochran: “garbage in, garbage out.”