
I love technology and I’ve been enjoying surfing another wave of development in the AI space. It’s fun and exciting, although safety questions pop up at the end of every conference and meetup I’ve attended recently. 

I don’t remember these kinds of questions coming up when the internet became a thing. No one was expecting the internet to attack them somehow, someday, although it has been the catalyst for many changes. 

While we’ve been concentrating mostly on future opportunities, it’s essential to also heed possible disruptions. Being aware and prepared for disruptions may also be key to leveraging the opportunities, ensuring we aren’t metaphorically or literally knocked over by a self-driving car in the process.

This article covers safety, disruption and ethics from a near-future perspective.

Robot Gold Prospector

The Gold Rush

The excitement is understandable. It’s a gold rush with competition from Microsoft (OpenAI), Google, Meta, Apple, Nvidia, Baidu, Tencent, Alibaba, Tesla (Elon Musk) and others around the globe. Elon says he’s rattled, though xAI and Optimus are still full steam ahead, whatever icebergs come. 

Like any gold rush, the allure of fortune is strong. The field is glamorous and the stakes high. Like the California Gold Rush of 1848 (or Australia’s in 1851), it’s marked by enthusiasm, speculation, and an influx of people seeking to strike it rich.

The physical gold rushes spurred economic growth, infrastructure and population expansion, countered by environmental degradation, displacement of indigenous people and social conflict. 

In our AI gold rush we have wonderful opportunities, though also the potential for negative and possibly even dangerous consequences. 

Know your product

It’s not simply a matter of product safety, as much of the development of AI and Robotics is still in the realm of research. If you buy a drill from the hardware store, depending on the country you buy it in, there is a whole raft of consumer protections.

Before that point, there has been a lot of discussion about safety in the product’s design, knowing it will be in the hands of consumers. There’s the safety of the workers involved in its manufacture (depending on the process). There are laws around research into things we already know are dangerous (e.g. carcinogens, radioactivity etc.).

Depending on the country, there are general and ethical guidelines around research to try and deal with “unknowns”: things such as Informed Consent, Continual Monitoring, Privacy, Data Manipulation, Environmental Impact, Conflicts of Interest, Transparency, Fairness, Accountability, Antitrust and Copyright.

Considering that lawsuits seem to have failed to prevent copyrighted image and text works from being used as training data, there is clearly a question around the moral framework of AI and Robotics research. It’s also unclear whether an AI can hold a patent or copyright, and thus whether it can be subject to infringement laws.

Currently, if an AI causes harm, who is legally responsible? The programmer, the user, or the AI itself?

The European General Data Protection Regulation (GDPR) includes provisions covering AI systems that make decisions about individuals, requiring transparency about how those decisions are made and allowing individuals to opt out of solely automated decision-making.
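For the programmers, here’s roughly what honouring that opt-out can look like. This is a minimal sketch, assuming a hypothetical loan-decisioning service; the names, threshold and flag are invented for illustration, not taken from any real system or from the GDPR’s text.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    credit_score: int
    objects_to_automation: bool  # hypothetical GDPR-style opt-out flag

def decide(applicant: Applicant) -> str:
    # Honour the opt-out first: route the case to a human reviewer
    # instead of letting the automated rule decide on its own.
    if applicant.objects_to_automation:
        return "queued for human review"
    # Otherwise apply the automated rule (a stand-in for a real model).
    return "approved" if applicant.credit_score >= 650 else "declined"

print(decide(Applicant("Alice", 700, objects_to_automation=False)))  # approved
print(decide(Applicant("Bob", 700, objects_to_automation=True)))     # queued for human review
```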

That is roughly the state of the current law.

Are AI and Robotics an atomic problem?

If the laws are hard to apply to the unknowns, are the consequences as drastic as the discovery of fission? Figures such as the late Stephen Hawking and Elon Musk frame it as potentially more dangerous than nuclear technology, though are these arguments rational?

The groundbreaking discovery of fission was made in December 1938 by radiochemists Otto Hahn and Fritz Strassmann in their Berlin laboratory, with Lise Meitner (by then exiled in Sweden) and Otto Frisch supplying the theoretical interpretation. Bombarding uranium with neutrons, they found the nuclei split into two nearly equal fragments, releasing both energy and neutrons. On August 6, 1945, the bomb was dropped on Hiroshima (with a world war as the global context).

The atomic bomb is a direct weapon, and no rational actor is keen to use one. AI and Robotics are not weapons in that sense, yet they may still pose an existential threat: they have the potential to transform society and the economy in ways we can’t fully predict, including the possibility of cyber warfare or job losses due to automation.

In that way AI and Robotics are not directly “here comes a new weapon”, and thus the comparison is hyperbole. However, when compared to climate change, pollution and other creeping problems, is this an issue that will sneak up on humanity in ways we can’t predict?

In that case we’d have to examine the potential changes that AI and Robotics could bring as well as the mechanisms to adjust to the change.

Interestingly, back to Elon Musk: one of his largest concerns is that we become so apathetic that humanity falls over as we simply forget how to do things. And if an AI does become capable of making its own decisions, it might not care about us, and thus do something not in our interest.

We need to be thinking about it.

What do we need to consider?

Whatever the positive and negative effects of AI and Robotics technology, any change raises two fundamental questions:

  1. Can society adapt and consume the changes?
  2. Can society survive the consequences of the introduction of this technology? 

A society can adapt, and might or might not be happy about the changes. Alternatively, a society might be broken by the change. The internet, with its many positive and some negative consequences, is an example of the former.

The right thing?

In a perfect world everything would be considered and technology would follow the rules, although Atomic Bombs, Thalidomide, Asbestos, DDT, Genetically Modified Organisms, Data Breaches and Social Media Algorithms may disagree.

In all these cases, the detrimental impacts and ethical implications of these discoveries were initially overlooked, leading to unforeseen adverse consequences. Even when we know about the risks, it’s important, as much as possible, to apply risk assessment, ethical oversight, and regulatory safeguards.

This exposes two imperfect (antisocial or unexpected) paradigms we may have to deal with:

  1. It happened without our knowledge or consent, and now we have to deal with it. 
  2. The people unbagging the cat did not care.

Your changing world

  • Economic Growth: “Australian organisations are predicted to spend around $3.6 billion on artificial intelligence systems in 2025”, according to research from IDC. Further, “spending growth represents a compounded annual growth rate (CAGR) of 24.4 percent from 2020 to 2025, as organisations increase their investments”. Robotics is currently “projected to reach US$691.50m in 2023, with slower growth at 0.05%”, according to Statista. The Australian Government allocated AUD $101.2 million over five years in the Federal Budget to support small businesses integrating quantum and AI technologies. This covers expansion of the National AI Centre, establishing a Critical Technologies Challenge Program, and establishing an Australian Centre for Quantum Growth to support the commercialisation of quantum computing. The previous government also funded programs at the Department of Industry, Innovation and Science, CSIRO and the Department of Education and Training.
  • Jobs: Cisco has put out some good research showing the potential impact of the adoption of AI technology in Australia. They estimate that by 2028, 630,000 Australian workers will lose their jobs to technology. Although there are articles that are upbeat about the opportunities that will supposedly replace or exceed this number of job losses, there are no exact figures. McKinsey and Company said in March 2023, “Without increased employment transition support, increased job churn could see Australia’s unemployment rate temporarily spike by up to 2.5% during the peak of the transition.” Some of the jobs affected first might be customer service, clerks, accounting and finance, graphic design, trading and investment, market research, media, technology, admin and middle management. The jobs created may be in AI and machine learning, sustainability, business intelligence, information security, data science and robotics engineering. I think you can see the disparity and the farcical nature of the premise there. Some research into what happened to coal miners in the United Kingdom during the 1980s-90s might give rise to some contemplation: the jobs shifted, and the skills required are not likely to be equivalent. That’s before the release of humaniform robots. Cisco’s model also predicts that the productivity gains will drive more spending on goods and services, and thus more jobs, while the ‘business services’ sector will be the most severely affected.
  • Innovation: As companies compete this will lead to technological innovation. 
  • Infrastructure: Expanding AI technologies will require increased computational power and data storage capacity, leading to more data centres and servers. There may also be a need for more sophisticated software, platforms, and systems to develop, implement, and manage AI technologies. Energy consumption will also need to be considered. 
  • Education: AI technology may inspire more students to pursue studies and careers in computer science, data analytics and machine learning. There’s also the fear of dependence where, as robots take over more tasks, people could lose important skills and become overly dependent on technology. 
  • Quality of Life: AI has the potential to drastically change the quality of life, through new medical treatments, personal assistance, helping with complex problems, automating tasks, improving energy efficiency and more. Or to make you poor. 
  • Social and Cultural Exchange vs Global Competition/Collaboration: Advances in AI may promote collaboration and knowledge exchange among nations and research entities. They might also promote fierce competition and an AI arms race. Maybe both. The internet is a good example: it began as a DARPA initiative, the web was born at CERN, and usage is now global, largely driven by the populace of nations.
  • Dangerous Situations: Robots (or robots piloted in conjunction with humans) may help with dangerous, complex or physically impossible jobs for humans. For example, space exploration, deep-sea explorations, pollution cleanup or disaster recovery. 
  • Security and Privacy: AI technology could lead to enhancements in security measures, surveillance and cybersecurity. It could also lead to security issues, both in terms of data privacy and the potential use of robots for malicious purposes. There’s a risk that AI could be used to enact surveillance, potentially by governments or large corporations, leading to a loss of personal privacy.
  • Inequity: An AI/Robotic Gold Rush may produce a significant economic boom for a few people or companies, whilst leaving the majority with little economic benefit (or worsening their social position).
  • Quality: Large Language Models can be trained from other LLMs, so the price is dropping and the capability of smaller companies to produce them is increasing, although bigger companies such as Google, Microsoft (OpenAI), Meta and Apple have so far cornered the market. The desire to release ‘product’ rapidly may come with corner-cutting, bias or a rise in errors.
  • Violation of Laws: Companies may violate existing regulations or fail to appropriately develop new ones to control and manage the use of these technologies. This has already been seen with artists fighting against the non-consenting use of their art to train visual models. This is probably true of audio models as well, where researchers are training models to generate music.
  • Ethics: After years of digital piracy and the much lower-paying streaming model, your created works are now ‘remixed’ to strip away copyright. Your likeness or voice might also be used, which could harm you, or exploit your prime asset if you are an actor. The rush towards robotic advancement could overlook serious ethical concerns related to AI and robotics, such as privacy, job displacement, and issues of control. Who is held accountable when an autonomous machine makes a decision that leads to harm? Ethical issues include consent and privacy in data use, transparency in AI decision-making, and the potential devaluation of human skills and interaction in favour of automation. This may affect social trust.
  • Algorithm Bias: AI algorithms are only as good as their input data. If the data contains biases, the AI will learn and perpetuate those biases, which could lead to discriminatory practices and exacerbate social inequalities (see the sketch after this list).
  • Environmental Impact: A mass production of robotics might cause environmental concerns due to the materials used and the energy needed to manufacture and operate them.
  • Over-hyping: There are risks in creating unrealistic expectations, and if these are not met, it could result in a kind of technology ‘backlash’. Fusion power is an example, where reported “net energy gain” figures typically count only the energy delivered to the fuel rather than the total energy the facility consumed, and are thus a fib.
  • Accidents, Dangers and the Unknown:  For example, without clear guidelines and safety standards, robots could inadvertently cause accidents or injuries due to malfunctions, design flaws, or unpredictable behaviour. Furthermore, as robots become more sophisticated and autonomous, there are also concerns about the possibility of robots being used for harmful purposes, whether through deliberate misuse (e.g. robots being used in criminal activities) or unintended consequences (e.g. AI systems making decisions that cause harm to humans).  
  • Crime and corruption: There’s going to be both.
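To make the Algorithm Bias point concrete, here is a toy sketch: a “model” that does nothing more than learn hiring rates per group from invented, skewed historical records, then faithfully reproduces the skew. All data and names are made up for illustration; a real model is more subtle, but the principle is the same: biased inputs become biased outputs.

```python
from collections import Counter

# Invented historical hiring records: (group, hired). The data is skewed:
# group "A" was hired far more often than group "B".
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

seen, hired = Counter(), Counter()
for group, outcome in history:
    seen[group] += 1
    hired[group] += outcome  # bool counts as 0 or 1

def predicted_hire_rate(group: str) -> float:
    # A naive "model": just the historical hire rate for the group.
    return hired[group] / seen[group]

print(predicted_hire_rate("A"))  # 0.8 -- the model perpetuates the bias
print(predicted_hire_rate("B"))  # 0.3
```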

The Law

The pace of change is so rapid that lawmakers are unlikely to keep up or, as it is a global issue, come to a global consensus.

The Law is a system of rules established by a society or government to (as much as possible) collectively regulate behaviour, balanced against a free society. Laws exist in a hierarchy, with the highest principles enshrined in a country’s constitution. The principles and values come from cultural, religious, philosophical or political beliefs. Laws also come about as a response to emerging issues, needs and situations that require official regulation.

That means all people are responsible for contributing to the law. This is what we collectively think is right, so we can operate a society. 

Isaac Asimov I, Robot Panther Science Fiction Cover

The Three Laws of Robotics

Have laws around the development of Artificial Intelligence and Robotics ever been considered before? Yes, they have.

At 12, I dived into Isaac Asimov’s “I, Robot”, a compilation of fictional tales about the development of robots and their integration into society (published on December 2, 1950). It was followed by “The Rest of the Robots”. Asimov, a renowned Science Fiction author, also penned the Foundation series, The Caves of Steel, and the Galactic Empire and Robot series, eventually merging these works through the later Foundation books, largely on the basis of the existence of robots.

His robot stories prominently featured the Three Laws of Robotics, suggesting that for seamless integration of robots into society, certain regulations guiding their conduct were essential; otherwise, robots could be uncontrollable and potentially dangerous.

It’s a romantic notion.

The three laws were stated to be from the fictional: “Handbook of Robotics, 56th Edition, 2058 A.D.” Given that it’s the 56th edition, the first may have been published in 2002 in the fictional timeline. 

The three laws are:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
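For what it’s worth, the laws amount to a strict priority ordering, which you could caricature in a few lines of Python. This is a toy sketch of the ordering only (every name and field is invented), and emphatically not a workable safety system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would this injure a human, or allow harm through inaction?
    disobeys_order: bool  # does it conflict with an order given by a human?
    endangers_self: bool  # does it risk the robot's own existence?

def choose(actions: list[Action]) -> Action:
    # Lexicographic priority: the First Law dominates the Second, and the
    # Second dominates the Third (False sorts before True in a tuple key).
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

options = [
    Action("push the human clear of the car", False, True, True),
    Action("stand still, as ordered", True, False, False),  # inaction allows harm
]
print(choose(options).name)  # push the human clear of the car
```

Asimov’s stories are, of course, precisely about how poorly such a tidy ordering survives contact with real situations.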

It would be fair to say that a lot of people working in robotics know about these fictional laws, even if they haven’t read all of the Asimov stories.   

These fictional robotic laws entail not harming humans (including through inaction), obedience to human orders, and self-preservation, each subordinate to the prior laws. Like tools, robots are designed for safe usage, functional performance, and durability. It’s just good product design.

Asimov’s three laws of robotics, used effectively in his and then other authors’ narratives, reflect a culture’s faith in rules governing new technology. Asimov aimed to move away from the cliché of robots attacking their creators.

“Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings?” – Isaac Asimov

He felt a robot in the stories would not “turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust.”

The Three Laws of Robotics are a romantic conceit of another time, and not practically workable, though they reflect that era’s acknowledgement of the sensibility and ethics around the subject.

Are we heading towards laws that govern the introduction and effects of our development of AI and Robots? When you’re a writer of fictional stories, you can agree with yourself that everyone across the world is noble and on the same page. You can write that there is a world government legislating across the globe. You can write a romantic conceit into the framing, to keep the main narrative moving.

The Internet Example

Another thing to consider is all the change driven by the existence and adoption of the internet and other technologies over the last 20-30 years. Those are some good examples of how it might all go down.

You could consider streaming, the gig economy, digital currency, data breaches, social media, automated checkouts, clickbait, education, general knowledge, conspiracies, online shopping and many more. 

What effect have these changes had on your personal life?

Furthermore, technology has limitations. Emotional intelligence, critical thinking, innovation, complex problem-solving, leadership, and the ability to understand nuanced human contexts are areas where humans excel. Machine responses are different from human responses, and thus the term Artificial Intelligence is a terrible brand. The outcomes are not really predictable.

Magnus Robot Fighter - Valiant Comics

Protection?

I think there are some fundamental things that may strengthen your chances of participating in the opportunities and mitigate some of the issues caused. Experiences are going to vary widely, depending on the person and circumstance.

  1. Assessment: Assess your current career and employment and try to get a handle on changes that may affect them. Or, in your business, try to assess the opportunities for the business or the impact on sales (positive or negative).
  2. Skills: From your assessment, find out where skills are lacking, and get them, buy them or hire them. A large period of adaptation is coming. For example, look at the current actors’ strike, and also the job ads from Disney and Netflix for AI consultants.
  3. Career Planning: Problem solving and creativity are a niche; machines excel at tasks that are repetitive and predictable. Current technologies regurgitate things that already exist: a drum machine rather than a symphony, a cheeseburger rather than a filet mignon. Probably a lot of businesses are going to sell the cheeseburger anyway, though I’m just suggesting competitiveness and thinking. Look for opportunities created by new industries.
  4. Regulations and awareness: Learn about AI ethics and the laws and regulations regarding AI and Robotics where possible and be aware of changes coming.
  5. Data Security: Examine and beef up your personal data security.
  6. Financial: Look at where your finances are at and plan for potential transitions.   

It boils down to how you protect your job, life, identity, truth, business and community, and how you can participate equally in the opportunities.

Conclusion

There are lots of laws of AI and robotics and also none. It’s an unfolding research project (or projects) that no one nation, company or individual is on top of. 

“May you live in interesting times” is supposedly a Chinese curse, though no Chinese source has ever been found. It was popularised by Robert F. Kennedy’s “Ripple of Hope” speech in Cape Town, South Africa on June 6, 1966: “There is a Chinese curse which says ‘May he live in interesting times.’ Like it or not we live in interesting times.”

The insult is that living in interesting times essentially means trouble and hardship. Sure, there are opportunities, though the gold rush is already happening, and it’s happening to you.
