Introduction

Over the last five years, there has been a serious rush toward creating robots, and even machines, that can think on their own. Proponents have conveniently dubbed these developments "Artificial Intelligence." However, when one looks more closely at that term, several outstanding issues, even flaws or misrepresentations, are exposed.

All one needs to do is look carefully at what any company or authority is representing when it uses that term. What is its true motivation? Is it to attract further investment into the enterprise? Is it to persuade the public to buy its stock? Is it to promote some commercial endeavor or advantage? AI is a term many entities like to throw around, but the truth is that while they may be promoting an aspect of advanced computing, it is still not real AI as you would understand it. Often, it is a development that may one day lead to AI, but it is not standalone Artificial Intelligence just yet.

Our goal is to shed new light on AI by highlighting those who have the most to gain from controlling the use or extension of AI as it is currently known.

It is also important to note that, as of today, there are no licensing bodies dedicated solely to regulating the use of artificial intelligence. However, various organizations, governments, and industry stakeholders are working to establish guidelines, standards, and regulations for the ethical and responsible development and deployment of AI technologies. Staying informed about this evolving landscape of AI governance and compliance is essential to ensuring the long-term ethical and lawful use of AI applications.

AI, by itself, is not inherently dangerous to humans; at least not yet. The potential risks associated with AI stem from how it is developed, deployed, and used. Issues such as bias in AI algorithms, lack of transparency, data collection and privacy concerns, and the potential misuse of AI technology can all pose risks to individuals and to society overall. It is crucial for governments, developers, policymakers, and even users to prioritize ethical considerations, transparency, and accountability in the development and deployment of AI, so that its potential dangers are mitigated and its responsible use benefits humanity overall.

AI, as it currently exists, is not capable of making moral decisions in the same way that humans do. AI systems operate on algorithms and data inputs, and their decision-making processes are guided by predefined rules and objectives set by their developers (READ: HUMANS). While AI can be programmed to follow ethical guidelines and rules, it lacks the ability to truly understand complex moral concepts, empathy, and context the way humans do. The ethical implications of AI decision-making remain a serious subject of ongoing research and debate in the field of artificial intelligence ethics.