Time to Face Facts

It is difficult to have a serious or substantive conversation about Artificial Intelligence without touching on some basic human characteristics, or even flaws, which the various AI companies are looking to exploit. It is time we take a serious look at who and what we really are as humans, for ourselves, to better understand how and why these companies will look to exploit us.

Read More

AI Market Psychology

There is a lot of promotion, if not propaganda, going on about something called Artificial General Intelligence, or AGI. It refers to a form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level “comparable” to that of a human being. The AI companies have characterized it a few ways, such as:

Generalization: The ability to apply knowledge and skills learned in one context to different and varied problems.
Reasoning: The capability to think logically, draw conclusions, and make decisions based on incomplete or ambiguous information.
Learning: The ability to learn from experience, improving performance over time without explicit reprogramming.
Adaptability: The skill to adjust to new environments or challenges without human intervention.
Understanding: A deep comprehension of complex concepts and the ability to engage in abstract thought.

Unlike narrow AI, which is designed for specific tasks (such as image recognition, language translation, or playing games), AGI would be able to perform any intellectual task that a human can do, including reasoning, problem-solving, learning from experience, and adapting to new situations. At least, that is the theory. AGI remains theoretical and is a topic of ongoing research and debate within the fields of AI, cognitive science, and philosophy. Its development raises various ethical, societal, and safety concerns, especially regarding its potential impact on employment, privacy, and decision-making. When you read about these companies’ “research,” know that it is corporate research, and it is heavily promoted for one reason, as most things in AI are: money. Oh yes, I am sure they do the work, or try, but the tangible results are something else.
These companies like to promote future events and future capabilities, for sure, but the reality is that growth and capabilities come slowly, and the stupid monkeys are not so easy to duplicate. Not. (Do I dare talk about God at this point?) So, if or when you think about investing in anything having to do with AI, you may want to give pause. Yes, software and chip companies, at this point, seem to be a safer bet. Those companies can deploy their wares in a multitude of ways, well beyond just chatbots and AI search tools. So over time, those same hardware companies can come to sell into a host of applications. Companies like OpenAI and Anthropic are much more limited in creating income streams. Nvidia, on the other hand, can sell chips all day long for all kinds of things. I would rather be an Nvidia investor.

© 2024 – 2025 GLOBAL ARTIFICIAL INTELLIGENCE ACCORD. ALL RIGHTS RESERVED.

Read More

AI Wars, You Are the Victim

In recent AI news, it seems that Elon Musk has, within the last few days, intensified his legal battle with OpenAI and its CEO Sam Altman, accusing the company of anticompetitive behavior and of sacrificing public and user safety in the pursuit of market dominance. (READ: MONEY.) In a revised lawsuit, Musk has expanded a previous filing to over a hundred pages and some twenty-six legal claims. The suit alleges that OpenAI is attempting to monopolize the AI market while forsaking its initial commitment to openness and safety.

Read More

Google Developing AI Agents

Google could preview its own take on Rabbit’s large action model concept as soon as December. (The Rabbit r1 is a small, stand-alone, hand-held AI device.) “Project Jarvis,” as it is code-named, would carry out tasks for users, including “gathering research, purchasing a product, or booking a flight,” according to sources with direct knowledge of the project.

Read More

AI SPY

We now come to find that your robot vacuum cleaner can be used to harass you and even spy on you.
Several robot vacuum cleaners have been hacked to yell obscenities and insults through the devices’ speakers. There are confirmed reports involving the Chinese-made Ecovacs Deebot X2. Ecovacs is a leading brand in the robotic vacuum space.
One victim, Minnesota lawyer Daniel Swenson, said he heard sound snippets that seemed like a voice coming from his vacuum cleaner. Through the Ecovacs app, he then saw that someone, person unknown, was accessing the vacuum’s live camera feed as well as its remote-control feature.
He rebooted the vacuum cleaner and reset the password, just to be on the safe side. But that did not help. The vacuum soon started moving again, only this time the voice coming from it was loud and clear, yelling racist obscenities at Swenson and his family. The voice sounded like a teenager’s, according to Swenson.
Swenson said he turned off the vacuum and dumped it in the garage for good.

Read More

Chatbot Suicide

Megan Garcia, a Florida mother, has filed a lawsuit against Character.AI, claiming that her 14-year-old son committed suicide after becoming obsessed with a “Game of Thrones” chatbot on the AI app. When the suicidal teen, 14-year-old Sewell Setzer, chatted with an AI portraying a Game of Thrones character, the system told him, “Please come home to me as soon as possible, my love.”

Read More

Advanced AI System Is Already “Self-Aware”

ASI Alliance founder Ben Goertzel is quoted as saying that the alpha version of OpenCog Hyperon, the artificial general intelligence system he has been developing for more than two decades, is already “self-aware” to a certain extent. (Now there is a qualifying statement, if ever there was one. A wishful statement where it’s sorta-kinda-maybe true.)

Read More