AI Trends
Jan 28, 2022

11 Things We Should All Know About Artificial Intelligence

super.AI
Chief AI for Everyone Officer
SUMMARY

A recent KPMG study, Thriving in an AI World, found that the COVID-19 pandemic has accelerated artificial intelligence (AI) adoption, with the financial services (+37% YoY), tech (+20% YoY), and retail (+29% YoY) industries seeing the most growth. However, the business and IT decision makers that took the survey also expressed concerns about how quickly AI is moving.

For instance, roughly 50% of the respondents from the industrial manufacturing, retail, and tech sectors say artificial intelligence is moving faster than it should in their industry. Smaller companies (63%), respondents with higher levels of AI knowledge (51%), and younger (Gen Z and Millennials) business leaders (51%) also showed heightened concern around the rate at which AI and machine learning technologies are gaining traction.

Naturally, KPMG also found a surge in support for regulating artificial intelligence. The vast majority of business leaders across industries say the government should be involved in regulating AI, with retail (+24% YoY), financial services (+27% YoY), and tech (+17% YoY) seeing major spikes in support.

Against a backdrop of simultaneous acceleration in adoption and growing concern around the pace of adoption, it has never been more important to understand artificial intelligence. This doesn’t mean becoming a machine learning expert and writing complex AI algorithms, but rather building a high-level understanding of how AI works, the ways the technology is evolving, and the limitations of AI that don’t always get the attention they deserve.

This article covers 11 topics that help us better understand the future of artificial intelligence. Rather than discuss flash-in-the-pan trends that may fall out of the zeitgeist in a matter of months, we’ve identified substantive issues and themes that not only paint a picture of where AI is going, but help illustrate the true utility of artificial intelligence in its current state.

#1: Commoditization of artificial intelligence models and the proliferation of AI applications

The biggest AI trend to pay attention to in 2022 is two-fold: (1) the commoditization of artificial intelligence models and (2) the rise of AI applications. Over the last 15-20 years, AI has transitioned from infrastructure and technical applications to more end business applications.

Due to the large amount of hype surrounding AI, people are now renewing their focus on business results, much like they would with any other technology. Rather than talking about named entity recognition, for instance, because it sounds impressive, people want to know: How is this going to impact my business? Instead of isolated R&D groups working in silos, AI projects will become integrated into entire organizations and focused on return on investment (ROI). The rising ubiquity of AI and an emphasis on tangible functionality (rather than ethereal potential) will advance the commoditization of artificial intelligence models.

The second component of this is a shifting focus from AI models to AI applications. The biggest differentiator between a model and an application is that applications are ROI driven. Companies will stop approaching AI as if it has unlimited potential, and instead start identifying a limited set of use cases to pilot, ensuring there is C-suite buy-in, and tying projects to concrete business outcomes. The end goal of this is to stop using AI in a vacuum, and begin combining it with existing investments such as human workers, robotic process automation (RPA), business process outsourcing (BPO), and more.

#2: Rise of AI marketplaces

The rise of artificial intelligence marketplaces is going to be a major trend in 2022 and beyond. Communities and marketplaces are cropping up around the core building blocks for AI solutions: models. For example, Hugging Face and TensorFlow Hub act as marketplaces for trained AI models. However, GitHub has been the primary modality for launching machine learning projects even though it was designed for software, not for AI.

Software projects are mostly just code, and so everything in GitHub was designed around code. The platform makes it very easy to track changes across code and manage software projects, but it lacks more sophisticated functionality that is necessary for machine learning and AI. This is because AI isn’t just code, it includes important extras that GitHub wasn’t designed for:

  • Data: Machine learning models change based on the data they are fed, which makes data versioning an essential component of AI projects. Whereas code comes out of a software engineer’s head, machine learning algorithms are implied by their data. GitHub wasn’t designed with data as a core component of a project, so data isn’t tracked the way machine learning models need it to be (see the sketch after this list).
  • Model parameters: If the code is the design for how to build a human brain, the parameters are how to build your exact brain. This matters because the same code base used to train a model can generate hundreds or thousands of different sets of parameters, and a different dataset will generate different parameters still. Like data, parameters are a first-class artifact of an AI project that GitHub has no native way to track.
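
To make the data-versioning point concrete, here is a minimal sketch of one way to fingerprint a training run by hashing the dataset and hyperparameters together, so the same code trained on different data or settings yields a distinguishable artifact. The function name and file path are our own illustration, not any specific tool’s API:

```python
import hashlib
import json

def fingerprint_run(data_path: str, hyperparams: dict) -> str:
    """Hash the training data and hyperparameters together.

    The same code base trained on different data or settings yields a
    different fingerprint, which is exactly what plain git history misses.
    """
    h = hashlib.sha256()
    with open(data_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    # Sort keys so logically identical configs hash identically
    h.update(json.dumps(hyperparams, sort_keys=True).encode())
    return h.hexdigest()

run_id = fingerprint_run("train.csv", {"lr": 3e-4, "epochs": 10})
print(f"model artifact id: {run_id[:12]}")
```

Dedicated tools such as DVC and MLflow formalize this idea, versioning data and parameters alongside code.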

For more information about the importance of building a platform tailored for machine learning and AI, check out our blog: GitHub is Bad for AI: Solving the Machine Learning Reproducibility Crisis.

#3: AI Safety

As mentioned above, with rapid adoption of AI also comes rising concern about the pace of change and growing support for government regulation. At the center of all of this is the concept of AI safety, which has a question at its core: Can you build AI with guarantees?

AI safety is about creating AI that can be trusted, establishing confidence in its behavior and robustness, and thereby supporting the successful adoption of AI in society. Achieving these goals relies on fairness, accountability, and explainability. The key is to design AI with these principles in mind, then use precision modeling to embed them into products. While this solution might sound simple, it’s tough for many teams to execute.
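
As one concrete illustration of the fairness principle, here is a minimal sketch of a demographic parity check: the gap in positive-prediction rates between groups of a model’s outputs. The function name, inputs, and threshold are illustrative, not a standard API:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between the most and
    least favored groups; 0.0 means perfectly equal treatment."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative: compare approval rates across two hypothetical groups
gap = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
print(f"demographic parity gap: {gap:.2f}")  # 0.33 here: worth investigating
```

Checks like this are one small piece of the fairness, accountability, and explainability toolkit; the hard part is wiring them into every stage of a product.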

If you would like more information about AI safety, check out our blog: Is AI Safe?

#4: Unstructured data processing

A broad transition in focus from structured to unstructured data processing (UDP) will define AI in 2022. Structured data processing has dominated automation and RPA for the last few years; however, the market is running out of room with structured data. Consider that an estimated 90% of the world’s data was created in the last two years, and 80% of it is unstructured. Unlike structured data, unstructured data doesn’t follow a pre-defined model or schema, making it far more difficult to process and analyze.

Because unstructured data is produced in greater volumes than structured data, finding ways to structure it so that it can be easily processed and analyzed by machines will become a requirement for remaining competitive. Operating on a small amount of data, which is the norm for many of the largest companies in the world, will no longer be viable when rivals are taking advantage of unstructured data using AI. 

A rise in demand for unstructured data processing will be driven both by the benefits of existing automations powered by structured data plateauing, and organizations waking up to the reality that most of their data is locked away in unstructured formats.
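
As a toy illustration of what "structuring" unstructured data means in practice, the sketch below pulls typed fields out of free-form invoice text with regular expressions. Real unstructured data processing relies on ML models rather than hand-written patterns, and the document and field names here are invented for the example:

```python
import re

RAW = """Invoice #4821 issued 2022-01-14
Bill to: Acme GmbH
Total due: $1,250.00"""

# Hand-written patterns stand in for what an ML extraction model would learn
FIELDS = {
    "invoice_number": r"Invoice #(\d+)",
    "issue_date": r"issued (\d{4}-\d{2}-\d{2})",
    "total": r"Total due: \$([\d,]+\.\d{2})",
}

record = {name: m.group(1) if (m := re.search(pat, RAW)) else None
          for name, pat in FIELDS.items()}
print(record)  # {'invoice_number': '4821', 'issue_date': '2022-01-14', 'total': '1,250.00'}
```

Once the text is reduced to a structured record like this, it can flow into the same databases, analytics, and automations that structured data already feeds.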

For more information, check out our educational page on unstructured data processing.

#5: Hyperautomation

Hyperautomation is an important concept to understand, as it’s integral to the value proposition of artificial intelligence and unstructured data processing. Until recently, people implementing automations had to choose between robotic process automation (RPA) and artificial intelligence (AI). Hyperautomation is the amalgamation of these technologies, where AI, machine learning, humans, RPA, and more work together to automate increasingly complex tasks.

For non-technical people, this will result in more advanced robotic process automation that will enable employees and managers doing office work to further automate their own work without knowing how to code. As a consequence, employees will be able to get more work done because most of their monotonous tasks can now be automated. Of course, if employees and management without coding expertise can use AI and ML to automate their jobs, those with technical expertise can also apply the same tactics.

One way to understand this is to think about the types of tasks we can automate now and the types of tasks we’ll be able to automate with hyperautomation. If a human can complete a task in less than a second, there is a really good chance that a relatively rudimentary technology can automate it. The longer a task takes a human to solve, the more difficult it becomes to automate.

Around 2019 it became possible to automate tasks that take humans roughly 10 seconds to solve. With the merging of humans, AI, and software in 2022, it will become possible to automate complex tasks that take humans around five minutes to solve. This is an oversimplification, but with hyperautomation we’ll reach the point where we can automate tasks that would take humans five minutes to solve with over 98% accuracy, which represents a remarkable advancement in artificial intelligence technology.
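
One common pattern behind accuracy numbers like that is confidence-based routing: the model handles what it is sure about and hands the rest to a person, so the combined system stays accurate on tasks neither could fully automate alone. A minimal sketch, with the threshold and function names as our own illustration:

```python
CONFIDENCE_THRESHOLD = 0.95  # tune per use case; illustrative value

def hyperautomation_step(task, model_predict, human_review):
    """Route a task to the model when it is confident, else to a person.

    `model_predict` returns (label, confidence); `human_review` is any
    callable that asks a person, e.g. via a labeling queue.
    """
    label, confidence = model_predict(task)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "automated"
    return human_review(task), "human"

# Illustrative usage with stand-in callables
result, route = hyperautomation_step(
    "classify this document",
    model_predict=lambda t: ("invoice", 0.91),
    human_review=lambda t: "receipt",
)
print(result, route)  # receipt human: low confidence fell back to a person
```

The design choice here is that accuracy is guaranteed by the routing policy rather than by the model alone, which is what lets humans, AI, and RPA combine into one reliable system.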

If you’re interested in reading more about hyperautomation, check out our blogs on the topic.

#6: Further specialization of roles

As AI adoption grows and its applications become further specialized, roles will also become further specialized. Rather than simply hiring data scientists or data engineers, companies will seek model creators, model tuners, model deployers, model monitors, and other more specific roles when building AI teams. Additionally, there will be further specialization of the model creation process, from discovering new use cases to labeling and training data. We may also see individual contributors outsourcing their services on platforms. For instance, employees and managers could use no-code AI platforms to automate their own work. As a consequence, employees will become more efficient because most of their repetitive work will be automated, allowing them to focus on areas where they can make the greatest impact.

#7: Centralization of data and machine learning platforms

Data sprawl has become a real and costly problem inside organizations, harming innovation and creating data silos. Each team has its own agenda, and teams aren’t necessarily cooperating or collaborating effectively.

There is a counterintuitive property of AI that is not widely appreciated. In almost every other field, breadth comes at the cost of depth: in marketing, for example, the broader your messages, the shallower they become; even in software, the broader your applications, the less depth they offer. In machine learning, it’s actually the opposite.

The broader you go, the deeper you also go. Unintuitive as this is, some people started to realize in 2021 that it creates a real advantage and incentive to have a centralized data platform or machine learning platform. We’re seeing a lot of requests to use super.AI technology as a central platform as people recognize this counterintuitive property of AI.

#8: Rise of low-code/no-code AI and AutoML

The rise of low-code and no-code AI, which are tools that allow non-technical users to build, test, and deploy AI applications without learning to code, is being driven by the massive surge in AI adoption mentioned earlier in this article. The rapid pace at which artificial intelligence technology is being adopted by small and large businesses alike is throttled by a shortage of talent. Low-code, no-code, and AutoML tools help overcome this talent gap by giving people who lack technical skills the ability to leverage sophisticated machine learning models and create powerful automations powered by unstructured data. These tools may not perform perfectly, but the idea is to not let perfection get in the way of progress.
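
To see what these tools automate, here is a minimal scikit-learn sketch of the model-selection loop an AutoML or no-code platform runs behind its interface. The dataset and parameter grid are illustrative choices, not a prescription:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# A no-code tool hides steps like this behind a UI; here is the gist in code.
X, y = load_digits(return_X_y=True)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [None, 10]},
    cv=3,  # 3-fold cross-validation guards against overfitting one split
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Full AutoML systems extend this loop to feature engineering and model choice; no-code platforms then wrap the whole thing in a point-and-click interface.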

If you have an idea for how AI might be useful to you or your business but aren’t sure where to start, book a meeting with an unstructured data expert at super.AI.

#9: Larger and smaller models

An inevitable AI trend in 2022 is the release of bigger and bigger models. There will likely be a 100 trillion parameter model (e.g., GPT-4) that allows us to continue advancing few-shot, one-shot, and zero-shot learning, sub-areas of machine learning defined by the number of samples used to train a model. This advancement is likely to drive further AI adoption as more people begin to realize the potential of the technology.
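
As one example of what large pretrained models already enable, Hugging Face’s transformers library exposes zero-shot classification as a pipeline, classifying text into labels the model was never explicitly trained on. The checkpoint and labels below are one common illustrative choice:

```python
from transformers import pipeline

# Downloads a pretrained NLI model on first run; no task-specific training needed
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The invoice total of $1,250 is due by the end of the month.",
    candidate_labels=["finance", "sports", "weather"],
)
print(result["labels"][0])  # expected: "finance", with zero fine-tuning
```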

Additionally, there is likely to be a rise of tiny AI. These will be micro-models that run on small devices (e.g., IoT hardware). In some instances, more data isn't necessarily better. Micro-models help reduce the challenges associated with sourcing and annotating large amounts of data, and will continue to gain in popularity as their benefits become better established.
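
One standard route to such micro-models is post-training quantization, which shrinks a trained network so it fits small devices. Here is a hedged sketch using PyTorch’s dynamic quantization; the toy model is our own stand-in for a real trained network:

```python
import torch
import torch.nn as nn

# Toy model standing in for a trained network destined for an IoT device
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Convert Linear weights from 32-bit floats to 8-bit integers; accuracy
# usually drops only slightly while the model shrinks roughly 4x
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```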

#10: Increased interoperability of AI models

As models continue to grow in size and number, it will become essential to have a protocol, or some other scalable method, for models to communicate. This is called the interoperability of models, or the composition of models, and the goal is to allow models to integrate with each other to get more out of them.
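
A simple way to picture composition: each model is a function, and interoperability means their inputs and outputs line up so they can be chained. The sketch below wires a hypothetical OCR model into a hypothetical entity-extraction model; both are invented stand-ins, not real APIs:

```python
from typing import Callable, List

# Stand-ins for two independently trained models with compatible interfaces
def ocr_model(image_bytes: bytes) -> str:
    return "Invoice #4821 from Acme GmbH"   # pretend OCR output

def entity_model(text: str) -> List[str]:
    return ["Acme GmbH"]                    # pretend entity extraction

def compose(*models: Callable) -> Callable:
    """Chain models so each one's output becomes the next one's input."""
    def pipeline(x):
        for model in models:
            x = model(x)
        return x
    return pipeline

document_pipeline = compose(ocr_model, entity_model)
print(document_pipeline(b"...image bytes..."))  # ['Acme GmbH']
```

The hard part at scale isn’t the chaining itself but agreeing on the interfaces, which is why a shared protocol matters.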

This is something we spend a lot of time on at super.AI, and it’s even potentially a path to artificial general intelligence. What makes society powerful is that we can all learn from and connect with each other; this is why we saw such big economic booms with the advent of the printing press and the internet. The same will likely be true in AI. Right now, AI is largely powerful models making isolated predictions. But once all the various models begin collaborating, there may be another renaissance.

#11: Creative AI

Hollywood already uses AI to procedurally generate worlds and create faces with generative adversarial networks. For example, consider the most recent Matrix game, which may be more accurately described as a tech demo. It uses AI systems to drive the characters and vehicles, while procedural systems built using Houdini generate the environment. The creators have merged ultra-realistic 3D modeling, ultra-realistic lighting, real-time physics, and AI to demonstrate where the future of video games and film production is heading.

AI is affecting every part of society, even creative fields that people never thought it would impact. Moving forward, we're likely to see AI used to design art, food, and a lot of other creative applications that will surprise us all.

Problems facing the future of artificial intelligence

It would be irresponsible to discuss the future of AI, and all its glorious potential, without homing in on some of the biggest issues facing the technology. To address this topic we’ve selected two specific issues to focus on: the social impact of AI and bias in AI.

The social impact of AI

According to McKinsey, by 2030 up to 375 million workers (14% of all workers) will have to move out of their current occupations as their jobs are displaced by automation. Researchers at Oxford made an even bolder estimate, predicting that 47% of jobs will be automated out of existence by 2034. While this is incredibly difficult to predict with a high degree of accuracy, it is clear that artificial intelligence and machine learning will be hugely disruptive to the economy and the workforce.

To help contextualize the social impact of AI, consider the idea that a lot of what we know in the world we’ve only picked up within our lifetime. However, by zooming out and looking at the entirety of human history, instead of just the sliver of it unfolding in our lifetimes, patterns begin to emerge. New technology has been replacing jobs for as long as there has been new technology and jobs.

In the 1800s, around 90% of the population lived and worked on farms. Today, it is less than 1%. In the 1800s, each farm grew enough food per year to feed three to five people. Today, the average farmer in the U.S. grows enough food to feed 155 people. This example illustrates that a loss of jobs due to new technology doesn’t happen in isolation, and it brings with it many benefits. Although the vast majority of farming jobs have been rendered obsolete, the industry as a whole is considerably more efficient, producing more food with fewer resources and greater stability.

Because we haven’t seen technology displace workforces in our lifetimes, the prospect of it sounds scary and daunting. It helps to understand that this isn’t in fact a new phenomenon, but one that has occurred regularly throughout human history. It also doesn’t mean there won’t be new opportunities that emerge as a result of technological progress. A more valid concern than AI and machine learning taking jobs away from humans is the rate at which this will happen. The principle of telescoping evolution posits that evolution doesn’t happen linearly, but gets faster with each iteration.

For instance, it took roughly two billion years for life to form on Earth, six million years for hominid evolution, and roughly one hundred thousand years to get to humans as we know them today. Moving beyond human evolution, we see a similar trend in technological evolution. The agricultural revolution took over 10,000 years, the scientific revolution roughly 400 years, and the industrial revolution just 150 years. What’s often called the fourth industrial revolution, or the technological revolution, has happened within a hundred years. With AI, it’s possible for revolutions to happen in tens of years.

Previously, people may have had to change jobs once in their lifetimes. As AI continues to take over all kinds of jobs, even creative jobs never thought possible to automate, it is understandably making people a bit uncomfortable.

Bias in artificial intelligence

Gartner predicts that through 2030, 85% of AI projects will deliver false results caused by bias. Additionally, people are increasingly concerned about algorithmic bias in AI systems that can cause gender bias, racial prejudice, and age discrimination. These are valid issues to worry about, and to begin attempting to solve them we have to start with a question: How does AI learn to make decisions?

As mentioned before, machine learning algorithms are implied from data. The issue isn’t that artificial intelligence is inherently biased, but rather that the model in question had bias in its training data. It is also important to note that most data is created directly by humans or derived from human activity, and therefore the bias baked into the data is a reflection of the bias that exists within society. Put differently, bias in AI is more a reflection of human beings than it is a reflection of artificial intelligence.

With this in mind, the question becomes: How do you fix the training data? Unfortunately, to do this comprehensively, bias would have to be removed from society. While that is a noble undertaking, it extends far beyond the scope of this blog article or the capabilities of any one individual or organization. Fortunately, in isolated applications it is possible to reduce bias using tactics like AI frameworks, AI explainability, and AI governance, which, as mentioned previously, all fall under the umbrella of AI safety.
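
In an isolated application, one concrete data-level tactic is rebalancing the training set, for example oversampling underrepresented groups so the model doesn’t simply learn the skew. A minimal sketch; the data layout and function name are invented for illustration:

```python
import random

def oversample_to_balance(rows, group_key):
    """Duplicate examples from smaller groups until all groups match
    the largest one. Crude, but illustrates data-level debiasing."""
    by_group = {}
    for row in rows:
        by_group.setdefault(row[group_key], []).append(row)
    target = max(len(g) for g in by_group.values())
    balanced = []
    for group_rows in by_group.values():
        balanced += group_rows
        balanced += random.choices(group_rows, k=target - len(group_rows))
    return balanced

rows = [{"group": "a", "label": 1}] * 80 + [{"group": "b", "label": 1}] * 20
balanced = oversample_to_balance(rows, "group")
print(len(balanced))  # 160: both groups now contribute 80 examples
```

Rebalancing treats a symptom rather than the cause, which is why it belongs alongside, not instead of, the fairness checks and governance discussed earlier.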

Additional artificial intelligence resources

At super.AI our goal is to make artificial intelligence accessible to everyone, and that is something we carry with us in everything we do. From the technology we build to the resources we curate, our aim is to make people feel comfortable learning and leveraging AI and machine learning. For more information about artificial intelligence, explore the other resources on our blog.
