Top 3 Tech Risks by 2040: From AI Races to Invisible Cyber Attacks



The technology and reach of computer systems are changing at an alarming rate. There have been amazing advances in artificial intelligence, in the mass of tiny interconnected devices we call the “Internet of Things,” and in wireless connectivity. Unfortunately, these improvements bring potential risks as well as benefits. To have a secure future we need to anticipate what will happen in computing and address it early. So, what do experts think will happen and what can we do to prevent bigger problems?

To answer that question, our research team from the universities of Lancaster and Manchester turned to the science of looking ahead, known as “forecasting”. No one can predict the future, but we can assemble forecasts: informed projections of what may happen based on current trends.

In fact, long-term forecasts of technology trends can prove quite accurate. And an excellent way to obtain such forecasts is to combine the views of many different experts and look for where they agree.

For a new research paper, we consulted 12 expert “futurists”: people who make long-term forecasts about the effects of changes in computer technology. We asked them for their predictions for 2040.

Using a technique called a Delphi study, in which experts refine their answers over several rounds of structured feedback, we combined the futurists' forecasts into a set of risks, along with their recommendations for addressing those risks.

I. Software Concerns

Experts foresee rapid advances in artificial intelligence (AI) and connected systems, leading to a far more computer-driven world than today's. Surprisingly, though, they expect little impact from two much-touted innovations: blockchain, a way of recording information that makes it difficult or impossible to tamper with, which they suggest is largely irrelevant to today's problems; and quantum computing, which is still in its infancy and may have little effect over the next 15 years.

The futurists highlighted three major risks associated with developments in computer software, as follows.

1. AI competition leading to disaster

Our experts suggest that many countries treat AI as a sector in which they want a competitive edge, a stance that encourages software developers to take risks in their use of AI. This, combined with AI's complexity and its potential to surpass human abilities, could lead to disaster.

Imagine, for example, that shortcuts in testing lead to an undetected flaw in the control systems of cars built after 2025, hidden amid the AI's complex programming. The flaw might even be linked to a specific date, causing large numbers of cars to start behaving erratically at the same time, killing many people around the world.

2. Generative AI

Generative AI could make truth impossible to discern. For years, photos and videos have been very hard to fake, so we have come to expect them to be genuine. Generative AI has already radically changed that situation, and we expect its ability to produce convincing fake media to keep improving, making it ever harder to tell whether a given image or video is real.

Suppose someone in a position of trust, a respected leader or celebrity, uses social media to publish genuine content but occasionally mixes in convincing fakes. For those who follow them, there is no way to tell the difference; it becomes impossible to know the truth.

3. Invisible cyber attacks

Finally, the sheer complexity of the systems being built, networks of interdependent systems owned by different organizations, has an unexpected consequence: it becomes difficult, if not impossible, to work out what caused things to go wrong.

Imagine that a cybercriminal hacks into an app used to control household appliances such as ovens or fridges, and switches all the appliances on at once. This creates a spike in electricity demand on the grid, causing major power outages.

Power company experts would find it challenging even to identify which devices caused the spike, let alone spot that they were all controlled by the same app. The cyber vandalism would be effectively invisible, impossible to distinguish from ordinary faults.

II. Software jujitsu

The purpose of such forecasts is not to sow alarm, but to let us begin addressing the problems. One straightforward suggestion from the experts is a form of software jujitsu: using software to protect and defend against software itself. We can make computer programs perform their own safety checks by creating additional code that validates their output; in effect, code that checks itself.
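To make the idea concrete, here is a minimal sketch of such output validation in Python. This is our illustration, not from the paper: the braking controller, its limits, and the fail-safe value are all hypothetical.

```python
# Minimal sketch of "code that checks itself": a complex control routine
# whose output is validated by simple, independent checking code before use.
# All names, limits, and the fail-safe value here are hypothetical.

def compute_brake_pressure(speed_kmh: float, gap_m: float) -> float:
    """Stand-in for a complex (e.g. AI-derived) control computation."""
    # Hypothetical heuristic: brake harder when fast and close to an obstacle.
    return min(1.0, max(0.0, speed_kmh / 200.0 + 5.0 / max(gap_m, 0.1)))

def validate_brake_pressure(pressure: float) -> bool:
    """Independent check: reject physically impossible outputs."""
    return 0.0 <= pressure <= 1.0

def safe_brake_pressure(speed_kmh: float, gap_m: float) -> float:
    """Run the controller, then trust its output only if the check passes."""
    pressure = compute_brake_pressure(speed_kmh, gap_m)
    if not validate_brake_pressure(pressure):
        # Fall back to a conservative default rather than trust bad output.
        return 1.0  # full braking as the fail-safe
    return pressure

print(safe_brake_pressure(80.0, 12.0))  # ~0.82
```

The key design choice is that the validator is far simpler than, and independent of, the logic it checks, so a flaw in the complex controller is unlikely to be mirrored in the check.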

Similarly, we can insist that methods already used to ensure software operates safely continue to be applied to new technologies. The newness of these systems should not be an excuse to ignore good security practice.

III. Strategic solutions

But the experts agree that technical answers alone will not be enough. Instead, solutions will be found in the interactions between humans and technology.

We need to develop new forms of education that cut across skills and disciplines to deal with these human-technology issues. And governments should establish security principles for their own AI procurement and legislate for AI security across the sector, promoting responsible development and deployment practices.

These forecasts give us a range of tools with which to tackle the problems that may lie ahead. Let's adopt those tools and realize the exciting promise of our technological future.


