5 Ways AI Doesn’t Like AI

If we choose to believe statistics, more than half of the people reading this suffer from a condition called FOMO, which stands for the Fear of Missing Out.  To be precise, you have a 56 percent chance of being one of them.

If you cannot start or end a day without checking email, social media and texts, you could very well be self-inflicting FOMO without realizing you’re the one holding the gun.

Social Media Today reports that a study conducted by British psychologist Andrew Przybylski reveals 56 percent of adults who are heavily involved in using social media apps are FOMO sufferers.  His research indicates FOMO is an addiction – an addiction to social media.  His work shows people get anxious if they haven’t seen the latest lame meme or read the latest fake news.

Currently, there is no 12-step program to cure FOMO addicts.  In fact, quite the contrary.  Click the link above and you’ll discover the staff of at least one social media newsletter encourages enablers.

So, this addiction isn’t totally the fault of the FOMOholic.  If I wanted to start a conspiracy theory, I’d blame capitalists and lazy people.  But that’s just me.  A capitalistic society does seem to insist on offering ever more online value, and joining private Facebook groups to interact is highly recommended.

I could blame Elon Musk, but he takes responsibility for his part in creating AI and warns about what can happen if the technology gets into the wrong hands.  If I wanted to start a conspiracy theory, I’d contend Elon knows something we don’t because he sure is in a hurry to get us to colonize Mars.

From my perspective in Left Field, the real problem is people are sheep in human form.  They need to be herded, and they like easy.  They forget they were created with a mind to think for themselves.

I get it.  Technology is designed to make life easier.  One of the latest technology gimmicks is called ChatGPT.  It’s an AI chatbot that writes for writers, and it is getting a lot of buzz.

I personally compare it to a mediocre actor speaking with a bad Texas accent.  Lack of authenticity is insulting.

People in a writer’s group I joined discussed ChatGPT in our private Facebook group.  Most of us thought ChatGPT needs to hire a talented human ghostwriter.  Otherwise, we found the writing too stilted to keep insomniacs awake.  To be fair, the writers had recently completed a master course on the elements required for a book to be a bestseller.  Our goal is to raise the bar for writers, not to give writers more time to congregate in the bar.

Personally, I question the legal and moral grounds for using such software as anything other than a resource tool.

Matt G. Southern of Search Engine Journal wrote that Google’s webspam team can take action against people who publish content written by ChatGPT.  Such copy is considered spam since it wasn’t generated by a human.  He didn’t indicate what the consequences might be.

Another writer, Lisa Buyer, contributed an article to the journal on how to “reverse FOMO and experience JOMO.”  That stands for the Joy of Missing Out.

I personally contend bots are so good at reproducing themselves that they can make themselves attorneys and sue us if we plagiarize.  Still, I’ll take my chances for you.

I consulted ChatGPT to try it myself before writing about it.  It seems like a great tool for providing information.  But using it as free labor to write a book or a movie script or whatever might not be such a great idea.

So I specifically asked ChatGPT for five reasons to avoid using this form of AI.  Here’s what it took mere seconds to generate:

1. “Lack of accountability:  There is often a lack of transparency and accountability when it comes to AI, making it difficult to determine who is responsible if something goes wrong or if the AI makes a mistake.”

Personally, I believe this explains why human resources executives aren’t very human.

2. “Bias:  AI can perpetuate and amplify biases that are present in the data it is trained on, leading to unfair and biased outcomes.”

Quite frankly, I’m not surprised.  AI is supposed to reflect society.  We’re programmed to judge people, and most of us unknowingly do it within seconds.  If you’re honest with yourself, you might have judged someone a time or two.

3. “Security concerns:  AI can be vulnerable to hacking and other security breaches, which can compromise sensitive information and cause harm.”

It’s amazing to think that AI can become so human it becomes greedy.  There are enough opportunists spending hours on the internet buying and selling your information – without permission – so you can get emails urging you to buy space heaters, solar panels and wrinkle cream.

4. “Ethical concerns:  There are ethical concerns surrounding the use of AI, such as the potential for misuse or abuse of the technology.”

I should have asked how ChatGPT’s AI defines ethical.  If it is truly programmed to think like I do, the answer would be Congress.

5. “Hype:  The hype surrounding AI can sometimes be disproportional to its actual capabilities, leading to unrealistic expectations and disappointment when it fails to deliver on its promises.”

I heard a rumor that the company that owns ChatGPT may start charging people like us to use the service.  I’m not opposed to that because it would be a start toward taking the free out of freelancer.
