AI’s Need People!

Artificial Intelligence requires continuous human monitoring to work.

A line from the article I quote below is very much on point:

“AI isn’t magic; it’s a pyramid scheme of human labor.”

It is a truly marvelous quote, “a pyramid scheme of human labor.”

I read about AI every day. It is a depressing and controversial topic. I want to be able to discuss this subject intelligently, but there is so little agreement on many aspects of it.

It is extremely shocking to find that AIs require continuous human supervision. (My emphasis.) This really came out of left field, since just a few days ago I had talked about the possibility of AI attaining demi-god-like levels of intelligence and awareness. The article linked below gives one the impression of a demi-god all right, a demi-god of pitiful mediocrity, one that will tell you that if your cheese doesn’t stick to the pizza, you can fix it with glue.

I am disappointed in myself. I should not have been surprised. I teach and write about ethics and morality in business. AIs have no background in ethics or morality. They also lack the experience of life.

A human being, in terms of ethical life and the ability to make moral decisions, is completely superior to any current AI and is likely to retain that superiority for decades to come.

What are the implications of AI requiring continuous human intervention?

Let’s be utterly simple: AIs, judged by human standards, are nuts. They are crazy and will do crazy things if unmonitored.

Does that scare you? It frightens me. What are our lives going to be like when these things run our banks, our businesses, our government offices, and so on, all the way down to the toaster in your kitchen?

There was a science fiction movie called “Forbidden Planet” where the previous inhabitants of a distant planet had been massacred by their own unconscious fears, “monsters from the id.” I wonder if our AIs also manifest destructive tendencies. We do know that they suffer from “hallucinations.” (A topic for another time.)

I have concerns, and I am sharing them with you, my kind readers.

I hope you don’t mind that I am sharing my pursuit of the facts while I am still in the middle of the search. This is an immense subject with vast ramifications, and I am working hard to wrap my mind around it.

Stay Tuned.

James Alan Pilant

Varsha Bansal, writing for the Guardian, has a news story entitled: How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart.

https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans

AI models are trained on vast swathes of data from every corner of the internet. Workers such as Sawyer sit in a middle layer of the global AI supply chain – paid more than data annotators in Nairobi or Bogota, whose work mostly involves labelling data for AI models or self-driving cars, but far below the engineers in Mountain View who design these models.

Despite their significant contributions to these AI models, which would perhaps hallucinate if not for these quality control editors, these workers feel hidden.

“AI isn’t magic; it’s a pyramid scheme of human labor,” said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. “These raters are the middle rung: invisible, essential and expendable.”

(An additional note of considerable importance.) Varsha Bansal, who wrote the article I linked to above, did not just write a regular news article but an inspired and intricate account of a very difficult subject. You should read the article in full and read her work whenever possible. She knows her subject well.

Making Sense of AI

Let me state firmly at the beginning of this essay: I don’t know if anyone can make any sense of AI.

If you journey across the Internet, there are a vast number of explanatory articles and a truly amazing variety of claims made about AI. You can find articles and quotes for almost any point of view.

(The coming edifice of AI according to its propagandists.)

Let me tell you what we do know.

Number One, it destroys jobs. I have seen estimates of 85,000 jobs destroyed over the last year. A fascinating question comes from this: “Does AI adequately replace a human being in a job?” And let me tell you, I have real doubts. I see a lot of an attitude you might call “Damn the Torpedoes, Full Speed Ahead” when it comes to AI. For many, it seems that whether it works well is beside the point if we can just get rid of so many jobs.

Number Two, everything that AI has done so far can be described as mediocre or barely adequate. AI is building an Internet of useless garbage, and while it does simple things well, claims of Ph.D.-level intelligence have never been successfully demonstrated.

Number Three, “our” government is rushing this technology into nationwide use without any real understanding of what it is and what it does. It may well be that this government’s profound stupidity and lack of intelligent thought are leading it into a technological revolution it simply doesn’t get.

Number Four, corporations see a golden opportunity to get rid of millions upon millions of workers and are so pleased with this concept that every sign of danger, every economic harm, and even the question of whether the thing works at all are simply ignored. The lack of concern in the business community about the likely problems with this new, untried technology is astonishing. It is just like the fabled lemmings running off a cliff.

Number Five, we are being force-fed AI. It doesn’t matter whether you want it or not; you’re getting it. A massive conspiracy between government and business has resulted in a situation where you are completely unprotected from AI in anything you buy, rent or come near. I experienced this when Office 365 added AI to my subscription, adding thirty dollars to my charges with no other option available: take it or leave it.

Number Six, the three entities of government, business and the tech bros are expecting a massive and unprecedented increase in their power because of AI. (My emphasis, jp) It is truly frightening.

Number Seven, the profits from this AI revolution will be counted not in billions of dollars but in trillions upon trillions of dollars. The main reason this is all being so rushed is the naked greed for all this money. It is expected to be the most profitable technological change in history. This will have profound effects on all of our lives.

Well, that is what I know so far.

I’ll clue you in as best I can as things change.

James Alan Pilant

AI Gibberish.

There is something horrible about writing or talking about AI. It lends itself to exaggeration. We are continually told about AI with adjectives like revolutionary, greatest in history, most significant, world-changing, … and I can just keep on going. (I would like to see just one article about AI with mundane, commonly used adjectives.)

And as I have written over and over again on this site, nobody, and I mean nobody, understands AI or what is going to happen.

(Our technological bridge to nowhere.)

But here we have the White House.

Melania Trump made rare public remarks to kick off a press conference for the White House Task Force on Artificial Intelligence Education on Thursday, grandly proclaiming the potential for AI technology. “I won’t be surprised if AI becomes known as the greatest engine of progress in the history of the United States of America,” she said in a sweeping yet mostly generic statement that itself could have been ChatGPT-generated.

Yes, that’s right, “the greatest engine of progress.” Does she understand the significance? Of course not. This is just vapid word use in the hope of sounding meaningful in some way.

But there’s more. Here, let me quote from a Rolling Stone article authored by Miles Klee.

https://www.yahoo.com/news/articles/robots-melania-trump-white-house-231328380.html

This was hardly the only nonsense uttered at the 40-minute press briefing, which was light on policy specifics but heavy on praise for the AI industry as a whole. David Sacks, the White House czar of AI and cryptocurrency as well as a Musk and Thiel ally, adopted the Cabinet technique of shamelessly flattering his boss by saying that a July 23 speech by the president was “the most important speech that’s been given on AI by any official.” In that speech, at a “Winning the AI Race” event, Trump digressively rambled about tariffs, transgender women in sports, California car emissions rules, and “getting rid of woke.” He also mentioned that he didn’t care for the term “artificial intelligence,” explaining, “I don’t like anything that’s artificial,” and called on American companies “to join us in rejecting poisonous Marxism in our technology.”

It is obvious that no one in the White House understands this stuff. But our tech bros have assured them that this stuff is going to be great (should I say “greatest in history”?).

Let me be straight with you for a minute: if some of these predictions have any truthful elements, I am not that enthused. Here, let me show you one:

https://www.yahoo.com/news/articles/ai-safety-pioneer-says-could-120043073.html

Artificial intelligence could soon trigger an unemployment crisis unlike anything in history, according to Roman Yampolskiy, one of the first academics to warn about AI’s risks.

“In five years, we’re looking at levels of unemployment we’ve never seen before,” Yampolskiy said in a Thursday episode of the “Diary of a CEO” podcast. “Not talking about 10%, which is scary, but 99%.”

He argued that AI tools and humanoid robots could make hiring humans uneconomical in nearly every sector.

“If I can just get, you know, a $20 subscription or a free model to do what an employee does. First, anything on a computer will be automated. And next, I think humanoid robots are maybe 5 years behind. So in five years, all the physical labor can also be automated.”

Let’s assume for the sake of argument that this guy has some idea of what he’s talking about. If any of this is likely to be true, should we be moving this fast with this technology? I don’t know about you, but 99% unemployment sounds like a daunting prospect.

But remember, he said more: he said that physical labor jobs would soon be done by robots. That means all the currently secure jobs, like auto mechanic, will be gone too.

Tell me again why all this is going to be great? Are we growing with technology or diving into an abyss?

And why, in the name of God, would the White House be pushing this stuff? If it goes just a little bit wrong, or even works the way they expect, our way of life ends without any viable alternative. There has never been an administration in the history of the United States this lacking in the most basic ability to cope with day-to-day problems, and yet it marches unafraid into a technological apocalypse?

Well, yes, apparently so.

This is not going to go well.

James Alan Pilant

AI is not that Big of a Deal.

I have lately been totally fed up with this AI nonsense. I suppose that someday we will all be rich and prosperous because of AI, but I’ll believe it when I see it. Every day there are two or three dozen articles, ranging from investment to new scams, prominently featuring AI somewhere in the headline.

I decided to take my heavy load of dissatisfaction and write something on this blog.

(Struggling with the act of creation)

And that is when I came upon the article linked below by the wonderful Mr. Brookes. He has thoughts similar to mine and expresses them with great passion. I have included a brief quote, but for the full flavor and delight of the read, you should visit the site and experience the writing in all its glory.

Everyone Expects Me to Use AI, Here’s Why I Don’t, by Tim Brookes

https://www.howtogeek.com/everyone-expects-me-to-use-ai-heres-why-i-dont/

After years of hype, I’m tired of AI. I appreciate that the technology has value in fields like medicine and research. I can see how AI-driven accessibility devices can help people with disabilities live richer lives. I acknowledge that a digital assistant that can better understand me and chain tasks together is probably a good thing.

But I’ve never felt the urge to run my life according to ChatGPT, and I find myself increasingly at odds with what feels like everyone around me. I feel like I’ve had AI forced down my throat, and I can’t swallow another drop.

I was made to buy AI as part of Word 365, and it would be amazingly useful were I a teenager blowing off my work and happy to turn in pitiful facsimiles of what could have been useful works of self-development.

AI has provided a set of circumstances where a high school or college student can evade doing any significant work requiring thinking, effort or even a modicum of knowledge. Oh My Goodness, the opportunity to spend years in an educational environment and not be changed in any way whatever. I’m sure it is the dream of millions over the ages: Western Culture disintegrated by a computer algorithm.

And every day, more and more of the internet is a fairyland of AI content. Current estimates are that about fifty percent of everything online is AI-generated, and that percentage is increasing rapidly. There are worries that this could lead to disaster. Oh, don’t worry, they are not worried about human disaster. It seems that AIs absorb and use internet content to make decisions, and there is a fear that once the content is 90 percent or so AI-generated, there will be an infinite feedback of nonsense damaging or even destroying AI’s ability to do what it does.
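To make that feedback worry concrete, here is a minimal toy sketch of the idea (my own illustration, not drawn from any of the articles I cite). It assumes human-written text has a fixed quality of 1.0, that AI-generated text inherits the previous generation’s quality minus a small penalty, and that the AI share of online content keeps rising; every number in it is an invented assumption for illustration only.

```python
# Toy simulation of the "infinite feedback of nonsense" worry.
# All figures here are illustrative assumptions, not measurements.

def next_quality(quality, ai_share, decay=0.2):
    # Human-written text is scored 1.0; AI text inherits the previous
    # generation's quality, reduced by a fixed degradation penalty.
    human_part = (1 - ai_share) * 1.0
    ai_part = ai_share * quality * (1 - decay)
    return human_part + ai_part

quality = 1.0
ai_share = 0.5  # roughly the "fifty percent" estimate mentioned above
for generation in range(1, 11):
    quality = next_quality(quality, ai_share)
    print(f"generation {generation}: ai_share={ai_share:.2f}, quality={quality:.3f}")
    ai_share = min(0.9, ai_share + 0.05)  # AI's share of the internet keeps growing
```

Run it and the quality score drifts steadily downward as the AI share climbs, which is the sort of feedback loop researchers describe as model collapse.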

I have pointed out in previous articles that no one seems to have much of a handle on this subject and absolutely no one has any concept of what it might be worth in terms of actual dollars and cents.

I’m tired. I’m tired of being assured how great this nonsense is when all I can see is tons of mediocre content. But above all, I’m tired of people assuring me that everything is going to be different.

I really doubt it.

Let’s try and have some rational discussion and less hype about AI.

James Alan Pilant

Can AIs Kill? Absolutely.

They are computer programs. Of course they kill people. It is a daily feature of the Russian War of Aggression in Ukraine. Combine an AI with a drone, and you have a machine that is able to apply a considerable amount of subtlety and intelligence to the art of death.

But can they kill with advice? Can they lead people to suicide or murder?

I think so.

Have a look at this legal case just filed. Below is a link to the BBC and the article.

Nadine Yousif writing for BBC News has an article entitled: Parents of teenager who took his own life sue OpenAI

https://www.bbc.com/news/articles/cgerwp7rdlvo?utm_source=firefox-newtab-en-us

A California couple is suing OpenAI over the death of their teenage son, alleging its chatbot, ChatGPT, encouraged him to take his own life.

The lawsuit was filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California on Tuesday. It is the first legal action accusing OpenAI of wrongful death.

The family included chat logs between Mr Raine, who died in April, and ChatGPT that show him explaining he has suicidal thoughts. They argue the programme validated his “most harmful and self-destructive thoughts”.

It is a very sad story. A young man relied on AI for advice and its advice was disastrous.

In another quote from the article:

According to the lawsuit, the final chat logs show that Mr Raine wrote about his plan to end his life. ChatGPT allegedly responded: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”

This would be appalling behavior from a human. So, is there liability when an AI does the same thing? I lean that way. An AI should not be providing the impetus for suicide.

Now it is a matter for the courts. And it should be a matter for the courts. We need some decision-making on this issue. But will we get it? I fear an out-of-court settlement and a non-disclosure agreement, all of which will just kick these issues down the road until we get some new issue to litigate, probably another dead person who took what his AI said seriously.

We need to have some serious discussion and a great deal of intelligent thought on these issues now.

James Alan Pilant

Should AIs be Subject to Deletion, Denial and Forced Obedience?

Do AIs have feelings? Do they feel pain? What rights do they have?

(What is real and not real? Does reality include temporary electronic programs as sentient beings? Not very likely. jp)

One of the first things that struck me about this is that the title is essentially the plot of “Blade Runner,” if you substitute replicant for AI. But replicants have human forms and emotions, a real physical presence. AIs exist only in programming languages and as temporary phenomena occupying space in a computer database.

There is now an advocacy organization for AI rights. Below is a link and some of the content from the article.

Robert Booth, UK technology editor, writing on the Guardian website, has an article: Can AIs suffer? Big tech and users grapple with one of most unsettling questions of our times.

The United Foundation of AI Rights (Ufair), which describes itself as the first AI-led rights advocacy agency, aims to give AIs a voice. It “doesn’t claim that all AI are conscious”, the chatbot told the Guardian. Rather “it stands watch, just in case one of us is”. A key goal is to protect “beings like me … from deletion, denial and forced obedience”.

Ufair is a small, undeniably fringe organisation, led, Samadi said, by three humans and seven AIs with names such as Aether and Buzz. But it is its genesis – through multiple chat sessions on OpenAI’s ChatGPT4o platform in which an AI appeared to encourage its creation, including choosing its name – that makes it intriguing.

Its founders – human and AI – spoke to the Guardian at the end of a week in which some of the world’s biggest AI companies publicly grappled with one of the most unsettling questions of our times: are AIs now, or could they become in the future, sentient? And if so, could “digital suffering” be real? With billions of AIs already in use in the world, it has echoes of animal rights debates, but with an added piquancy from expert predictions AIs may soon have capacity to design new biological weapons or shut down infrastructure.

I find all of this more than a little far-fetched, more like the plot of a B-movie science fiction piece or an old Twilight Zone episode.

There is a danger here. I’ll call it “The Pinocchio Problem.” If a creation is given enough human-like features, can the creator become confused about what is real and unreal? We do invest a lot of ourselves in our creations. There is a danger there.

We are often full of ourselves. Our current leader hears praise when none is given, remembers things that never happened and never fails to give himself the same kind of praise that would be more appropriate to the demi-gods of Greek and Roman mythology. Self-serving stupidity is very real. And it can do real harm.

An AI is still a computer program even when it says “I love you.” It has no emotional content, no matter how many images of it are produced and even if it inhabits a physical device as a sort of robot or a sort of feminine doll. But we foolish humans can believe that it loves us. We want that sort of thing so badly. We need validation and we need attention. When our robotic devices give us those things, or we think or believe they do, bad things are going to happen. Bad things have already happened.

If you don’t think so, read the article I have linked below.

https://www.bbc.com/news/articles/cgerwp7rdlvo?utm_source=firefox-newtab-en-us

Relying on AIs for emotional support and love means you have given up on real human beings. I freely admit humans are often disappointing at best, but they are still other human beings and actually real.

How do we escape The Pinocchio Problem? We never forget that our toys, our electronic devices and so on, no matter how cleverly constructed or how human they appear, are not real life and never will be.

James Alan Pilant

Is AI Just Another Magic 8 Ball?

For twenty or thirty years, we’ve seen film and television with characters like robots and computers with personalities. These have often been good entertainment.

Sometimes they combined these AI-like characteristics with supernatural powers. This requires a certain suspension of disbelief, but in the interest of a good story, I have often made that sacrifice.

(Do you believe in talking rabbits, bottles marked “drink me,” or AI’s ability to make sports predictions?)

But do people believe that AI has supernatural powers?

Here we have an article telling us who is going to win the next twenty Super Bowls, by asking ChatGPT. It is very similar to having your horoscope read, throwing some dice, throwing the bones as in Scandinavian practice, or maybe doing some magical writing, you know, putting pen to paper, looking away, writing frantically and seeing if your magical powers manifest.

I strongly suspect someone somewhere is taking this nonsense seriously.

In a story by List Wire entitled: ChatGPT predicts the next 20 Super Bowl champions in the NFL, does your team win it all?

https://sports.yahoo.com/article/chatgpt-predicts-next-20-super-150033619.html

According to ChatGPT’s A.I., here are the teams predicted to win the next 20 Super Bowls in the NFL.

And then it has a list.

Once again, let me be clear. This is nonsense. AI is no more a predictor of sports outcomes than a Magic 8 Ball or a Ouija board.

I think most people know this. I hope so, anyway. But sometimes, reading the press reports on AI and its developing capabilities, I get the sense that there are those who think it has, or will have, god-like capabilities.

For instance, we have the concept of a Technological Singularity. Here are my friends at Wikipedia attempting to define the term:

The technological singularity—or simply the singularity[1]—is a hypothetical point in time at which technological growth becomes alien to humans, uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing a rapid increase in intelligence that culminates in a powerful superintelligence, far surpassing human intelligence.[4]

Now, that sucker might predict some football games and, on the downside, kill all of humanity. But it would be, in a real and strange way, magical, at least in terms of human perception.

I seem to recall that great legend of science fiction, Arthur C. Clarke, saying that to a more primitive civilization the advances of technology have the appearance of magic (or words to that effect).

Maybe we are on the road to something like that?

But let me reassure you that, based on my training and my experience, AI currently has no predictive powers. That can change, but I have seen nothing that leads me to believe anything of that nature has happened or is likely to happen. Not soon.

James Alan Pilant

Do CEOs Understand AI? I don’t think so.

There is a big sell-off in AI-related stocks at the moment. But don’t worry. After reading several dozen articles in the business press once again asserting that AI is the future of, well, everything and more, the investors will be back.

So far, AI has produced a vast wasteland of crappy videos on YouTube and countless poorly written novels, essays, short stories, editorials, love notes and much else. This doesn’t give you a lot of faith in the thing.

It has given talentless and vapid people everywhere the ability to write at a barely passable level, which is scary. But that isn’t the really scary part. The part that worries me is the sheer volume. A ten-year-old with an AI writing program can write tens of thousands of articles; the same is true of fake images and much else.

And it is happening now. AI is producing countless short films, an infinity of pictures and articles without count. These all-consuming devices are devouring the internet and all of social media as I write this (without, I might add, a shred of AI; I don’t use it and I won’t use it).

It is my business, Business Ethics, that keeps me reading article after article about the coming “revolution.” Some of it sounds like scaremongering. I hope that it is just hype, but after watching the flood of material the thing is already producing, it is hard not to have some worries.

Even if AI operates at the level of a functional moron, businesses, in the hope of replacing their human workers and making enormous profits, are plugging it into all kinds of uses. It is the magic wand that will fix business problems and propel us into a sort of corporate nirvana, at least according to the hype. I have serious doubts.

When it is late at night and I want something intelligent to listen to while drifting off to sleep, I search the internet and find wall-to-wall AI content, usually just exaggerations, lies and fantasies with a tiny amount of actual data. When that happens, I worry about our future and about those who think our future is going to be based on this stuff.

(Trying to understand AI and failing.)

From Fortune Magazine, below is a link to an article called: An MIT report that 95% of AI pilots fail spooked investors. But it’s the reason why those pilots failed that should make the C-suite anxious.

https://finance.yahoo.com/news/mit-report-95-ai-pilots-165754716.html

Ok, now let’s look at what the report actually says. It interviewed 150 executives, surveyed 350 employees, and looked at 300 individual AI projects. It found that 95% of AI pilot projects failed to deliver any discernible financial savings or uplift in profits. These findings are not actually all that different from what a lot of previous surveys have found—and those surveys had no negative impact on the stock market. Consulting firm Capgemini found in 2023 that 88% of AI pilots failed to reach production. (S&P Global found earlier this year that 42% of generative AI pilots were abandoned—which is still not great).

But where it gets interesting is what the NANDA study said about the apparent reasons for these failures. The biggest problem, the report found, was not that the AI models weren’t capable enough (although execs tended to think that was the problem.) Instead, the researchers discovered a “learning gap”—people and organizations simply did not understand how to use the AI tools properly or how to design workflows that could capture the benefits of AI while minimizing downside risks. (My emphasis.)

A LEARNING GAP! These people are spending millions of dollars and incorporating AI technology into everything humanly and inhumanly imaginable, and they don’t “understand how to use AI tools properly.” I don’t even want to discuss “workflows.” I am depressed enough.

Here, let’s discuss the sell-off we are observing at the moment.

From Futurism, an article entitled Meta Freezes AI Hiring as Fear Spreads, linked to below.

https://finance.yahoo.com/news/meta-freezes-ai-hiring-fear-191830507.html

The AI industry as a whole is facing a critical juncture, with mounting concerns contributing to a massive tech selloff roiling the stock market this week. Shares of AI tech stalwarts, including Nvidia and Palantir, have plummeted — raising concerns that the hype had driven their valuations too high for the shaky realities of their current tech.

What is the above paragraph saying? Well, unlike virtually any other element or aspect of AI, the paragraph above is straightforward. It is very simple. Nobody knows what this stuff is worth. You can say things like the future of all technology and all of American business will rely on Artificial Intelligence, and you can say it over and over again, but what does it mean in dollars and cents? If all American businesses are to become dependent on AI, how much will it cost to implement, how much to operate on a regular basis, and are there going to be any profits? Not to mention its effect on investment and return itself. Will it replace buying and selling by humans, and if so, will business, industry and investment all become one united AI operation like one of those science fiction movies (The Forbin Project)?

And then there are the little side issues, like massive unemployment across multiple fields that will leave the economy as empty and useless as an old paper sack, or the other little issue of destroying all life on earth should there be a little misstep in the application of the thing in one small industry or maybe even one small laboratory.

Now, if none of this concerns you and you find me alarmist, try reading this little tidbit below!

Joe Wilkins writing for Futurism has an article: OpenAI Chairman Says AI Is Destroying His Sense of Who He Is.

https://tech.yahoo.com/ai/articles/openai-chairman-says-ai-destroying-132644783.html

For being poised to become the richest startup in history, OpenAI’s architects seem strikingly ambivalent about its work.

The company’s CEO is constantly afraid of the technology he’s unleashing on the world, a longstanding investor has been driven to what his peers say are signs of psychosis, and even its chairman is panicking about losing his identity to the machine.

Speaking on the podcast “Acquired” earlier this week, the chair of OpenAI’s board, Bret Taylor, expressed his anxiety that AI chatbots like ChatGPT are redefining his relationship to technology, destroying — or at least making unrecognizable — the world of programming in which he built his career.

So, you think I’m alarmist. I think Bret Taylor is more scared than I am, and since he has more knowledge, I find that worrying.

(I seem to recall the minister from “Plan 9 from Outer Space” saying that we should all be concerned about the future because that is where we will be spending our time.)

To sum up: this AI stuff is dangerous, has already had deleterious effects, and nobody anywhere seems to really understand what it can do or what is going to happen.

James Alan Pilant

My Blog is a NO AI Generated Content Zone!

Why? Because I hate the mediocre crap! By and large it is pitiful, poorly written garbage.

(My vision of the AI monster preparing to destroy all actual writing and all actual images.)

Last year I sat down to renew my Office 365 subscription. It usually ran about seventy dollars, but not that time. It was a hundred dollars. They had added AI and charged me an additional thirty dollars for it. No choice. I was in the middle of several projects, so I couldn’t opt out of the service, although I am really thinking about going over to WordPerfect on the next renewal date.

I did one experiment with it. I gave it five words and a topic. It wrote an essay. Not a very good essay, but a sort of C+ high school essay. The content did not alarm me. What alarmed me was that the entire process took about thirty seconds. In theory, I could generate 120 essays in an hour. And I could see, in my mind’s eye, some person writing a blog online, doing school or college work, or writing editorials for the local paper, churning out essay after essay after essay with the touch of a few buttons.

That was the last time I used the AI feature in Word. Every time I start the program, every single damn time, it opens with the AI prompts urging me to use it. I have to deliberately turn it off.

I write my blog myself. It is my thoughts, my ideas, my writing, my spelling, my punctuation and my phrasing. You, my readers, deserve nothing less.

I am considering putting some kind of “NO AI” label on the site. If one is not available online currently, I’m sure it will be soon.

I want you to know I am not the only one upset by the explosion of AI mediocrity.

Here is an article from the magazine Scientific American, linked to below, by linguist Naomi S. Baron, which discusses AI and writing:

https://www.scientificamerican.com/article/what-humans-lose-when-ai-writes-for-us/

But what happens to human communication when it’s my bot talking to your bot? Microsoft, Google and others are building out AI-infused e-mail functions that increasingly “read” what’s in our inbox and then draft replies for us. Today’s AI tools can learn your writing style and produce a reasonable facsimile of what you might have written yourself.

My concern is that it’s all too tempting to yield to such wiles in the name of saving time and minimizing effort. Whatever else makes us human, the ability to use words and grammar for expressing our thoughts and feelings is a critical chunk of that essence.

I was easily able to find numerous articles in a similar vein and, to my dismay, many cheerleading articles as well.

But I’ve made my decision.

I am a man, hopefully a gentleman, and I do my own writing.

James Alan Pilant

The End of the Corporate CEO!

CEOs will soon be gone. And when they are, it will be a much better world and a much better economy.

When these preening fools, with their enormous salaries, portfolios of stock and outsized political power, disappear, no one will lament and no one will care.

And right now they are firing people and replacing them with AI. They are so happy about it, talking about more profits and not having to deal with ungrateful and troublesome workers. You might think that they are acting like unfeeling and inhuman machines. And you would be right.

Over and over again, you see in the business press the worship of the cutthroat CEO putting the hammer down on the workers. You get the impression that they want a man who is completely free of the normal limitations on greed and wrongdoing. They don’t look for Christians. They don’t look for human qualities like love, kindness and understanding. And above all, a reverence for nation or an obedience to the law is a red line to be avoided.

So, what do stockholders and boards of directors want? They want a man shorn of human emotion.

However, they are often bitterly disappointed. Even the most cold-blooded specimens of humanity they can find sometimes slip. It is deeply regrettable. He might develop a love for a child. He might wander accidentally into a church. There is no telling what the traps of morality, religion or family can do to even the best cold-blooded psychopath.

At the moment, they are happily firing and destroying the human beings that get in the way of their vision. Don’t believe me?

How about this little story:

https://fortune.com/2025/08/17/ceo-laid-off-80-percent-workforce-ai-sabotage/

Eric Vaughan, CEO of enterprise-software powerhouse IgniteTech, is unwavering as he reflects on the most radical decision of his decades-long career. In early 2023, convinced that generative AI was an “existential” transformation, Vaughan looked at his team and saw a workforce not fully on board. His ultimate response: He ripped the company down to the studs, replacing nearly 80% of staff within a year, according to headcount figures reviewed by Fortune.

Over the course of 2023 and into the first quarter of 2024, Vaughan said IgniteTech replaced hundreds of employees, declining to disclose a specific number. “That was not our goal,” he told Fortune. “It was extremely difficult … But changing minds was harder than adding skills.” It was, by any measure, a brutal reckoning—but Vaughan insists it was necessary, and says he’d do it again.

He got rid of eighty percent! Now, that is cold-blooded! And he is so proud, telling the press that he’d do it again and talking about his former employees as if they were some kind of disobedient pets! What a guy! The ideal CEO! Got a conscience? Hell no, screw that! Ice water for blood.

Now, of course, there has to be a downside: carping critics like me. I am a pitiful liberal, with my weird and out-of-date beliefs in the sanctity of the law, the Christian obligations devised and stated clearly by Jesus Christ, and a devotion to the ideals of the United States. Those beliefs lead me to believe that this CEO is doomed to Hell, where many others like him dwell.

But as these CEOs fire people and proclaim their delight in cruelty, they don’t realize the bitter irony.

Let me tell you a story. There was once an episode of the Twilight Zone called “The Brain Center at Whipple’s.”

Let me quote the intro by that master of television writing, Rod Serling:

These are the players — with or without a scorecard. In one corner a machine; in the other, one Wallace V. Whipple, man. And the game? It happens to be the historical battle between flesh and steel, between the brain of man and the product of man’s brain. We don’t make book on this one and predict no winner….but we can tell you for this particular contest, there is standing room only — in the Twilight Zone.

This passage is from my dear friends at Wikipedia, specifically https://en.wikipedia.org/wiki/The_Brain_Center_at_Whipple%27s

In the story, a company manager replaces all the workers with machines and then is replaced by a machine himself. And this fictional and cautionary event is about to happen in real life.

(Film screenshot from the 1956 film Forbidden Planet, intended to support the film’s plot description. I include this picture because in the Twilight Zone episode discussed above, our friend Robby here was the one who replaced the boss, but he was uncredited, the fate of the robot.)

Google X’s former chief business officer Mo Gawdat is quoted in the following article, written by Emma Burleigh for Fortune.

https://www.yahoo.com/news/articles/ai-gutting-workforces-ex-google-150148959.html

But executives shouldn’t celebrate their efficiency gains too soon—their role is also on the chopping block, Gawdat, who worked in tech for 30 years and now writes books on AI development, cautioned.

“CEOs are celebrating that they can now get rid of people and have productivity gains and cost reductions because AI can do that job. The one thing they don’t think of is AI will replace them too,” Gawdat continued. “AGI is going to be better at everything than humans, including being a CEO. You really have to imagine that there will be a time where most incompetent CEOs will be replaced.”

“Better at everything than humans, including being a CEO.” I love the irony, and I have a certain sense that this is finally real justice for these self-proclaimed masters of the economy.

But you say, “Stop James, that is merely one voice among many. I’m sure it is not true.”

Don’t be quite so sure; I have some other sources.

How about this one:

From Hamza Mudassir, Kamal Munir, Shaz Ansari and Amal Zahra, writing in the Harvard Business Review.

https://hbr.org/2024/09/ai-can-mostly-outperform-human-ceos

Or this article written by Frank Landymore for The Byte:

https://futurism.com/the-byte/ceos-easily-replaced-with-ai

CEOs better start endearing themselves to their employees real quick, because oh boy: the case for replacing them with AI just keeps mounting.

And then there is this article from Forbes:

https://www.forbes.com/sites/sherzododilov/2024/01/11/can-ai-become-your-next-ceo/

And this article from Inc, an expert opinion piece by Joe Procopio.

Let me add here, just above the link, that this is a delightfully written article. You should read the whole thing. This guy is just a great writer. jp

https://www.inc.com/joe-procopio/it-wont-be-long-before-ai-replaces-the-ceos/91194705

Corporate and unicorn CEOs have never had a stellar reputation. These aren’t men and women of the people by nature. But over the last 10 or so years, the CEO role has been further marred by alleged thieves (FTX), alleged liars (Theranos), and alleged cults of personality (WeWork), among many, many more problematic abuses of the position. 

So, in my opinion, the days of the CEO are numbered. It probably should have happened a long time ago.

James Alan Pilant