Jan 13, 2022

Is AI Safe?

super.AI
Chief AI for Everyone Officer
SUMMARY

At the Consumer Electronics Show this month, John Deere unveiled an autonomous tractor that quickly courted a lot of press. John Deere claims that its 8R tractor, controlled by a smartphone app that gives it orders, can reach fields, plow soil, plant seeds, and avoid obstacles – all without a human present. There’s no doubt that John Deere’s CTO, Jahmy Hindman, is correct when he says the 8R’s debut marks “a monumental shift” in AI’s ability to transform work as we know it.

But tech like the 8R raises plenty of questions about whether we’re ready to abandon work as we know it in favor of new, unproven technology. If John Deere wants 8R sales to soar, it will need to convince users that the autonomous tractor is safe. And when it comes to making the case for AI safety to the wider world, John Deere might find that building the tractor was the easy part.

Studies conducted over the last few years consistently show low consumer trust in AI applications like autonomous vehicles. In a study published in February 2021, AAA reported that “The majority of drivers (80%) say they want current vehicle safety systems, like automatic emergency braking and lane keeping assistance, to work better and more than half—58%—said they want these systems in their next vehicle.” This points to something of a paradox: even though the vast majority of drivers see room for improvement in existing vehicle safety systems, more than half still want intelligent assistance in situations where safety is everything.

You might not be building autonomous tractors, but you can still learn from consumers’ reservations about AI safety in products like these. Highly publicized missteps with AI mean that skepticism is high among Americans, even if they say its pros outweigh its cons. If you want your own AI business to succeed this year, it’s not enough to debut an awesome product. You will need to prioritize AI safety throughout the product lifecycle and share your process with the public. That transparency shows consumers that you have their best interests at heart.

What is AI Safety?

Like the umbrella term AI itself, AI safety runs the risk of being so nebulous that it means nothing. The Stanford Center for AI Safety declares that its mission is “to develop rigorous techniques for building safe and trustworthy AI systems and establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society.” The Center’s projects include using:

  • Deep neural networks (DNNs) to avoid collisions amongst small drones.
  • Adaptive stress testing (AST) to validate safety in autonomous control systems.
  • Robotics to teach systems how they can learn and repeat human management.

Achieving these goals depends on grounding AI safety in fairness, accountability, and explainability. The key is to design AI with these principles in mind, then use precision modeling to embed them into products. While this solution might seem simple, it’s tough for many teams to execute. Super.AI CEO Brad Cordova has more on why AI safety is one of the biggest problems facing artificial intelligence.

Common approaches to AI model safety

AI governance involves creating process definitions and clear systems of accountability for developing and implementing AI systems within organizations. Capturing and managing AI model metadata in this way can help satisfy regulatory requirements and makes it easier to trust the outputs of AI systems.
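To make that concrete, a governance policy might require every deployed model to ship with a structured metadata record. The sketch below is illustrative only; the field names are assumptions, not an established schema:

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json


@dataclass
class ModelRecord:
    """Hypothetical metadata record an AI governance process might capture per model."""
    name: str
    version: str
    owner: str                      # person or team accountable for the model
    training_data: str              # reference to the dataset used for training
    intended_use: str
    known_limitations: List[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


record = ModelRecord(
    name="crop-row-detector",
    version="1.3.0",
    owner="ml-platform-team",
    training_data="s3://example-bucket/field-images-2021",  # placeholder reference
    intended_use="Detect crop rows in overhead field imagery",
    known_limitations=["Not validated on night-time imagery"],
)
print(record.to_json())
```

A record like this is what lets an organization answer, long after deployment, who owns a model, what it was trained on, and where it should not be used.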

Additional artificial intelligence safety concepts such as AI explainability, meaning a given model’s inner workings can be explained from input to output, may be components of an AI governance policy. There are also technical indicators that can be used to monitor AI model accuracy and notify the correct people when a model begins to malfunction.
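As a rough sketch of such an indicator (the threshold, window size, and notification step below are placeholders, not any particular product’s API), monitoring can be as simple as tracking rolling accuracy on labeled production samples and alerting a model’s owner when it drops below an agreed baseline:

```python
from collections import deque


class AccuracyMonitor:
    """Track rolling accuracy on labeled production samples and flag degradation."""

    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline    # accuracy observed during validation
        self.tolerance = tolerance  # acceptable drop before alerting
        self.results = deque(maxlen=window)

    def record(self, prediction, label) -> None:
        self.results.append(prediction == label)
        if (len(self.results) == self.results.maxlen
                and self.rolling_accuracy() < self.baseline - self.tolerance):
            self.notify()

    def rolling_accuracy(self) -> float:
        return sum(self.results) / len(self.results)

    def notify(self) -> None:
        # Placeholder: in practice this might page an on-call engineer or open a ticket.
        print(f"ALERT: rolling accuracy {self.rolling_accuracy():.2%} "
              f"is below baseline {self.baseline:.2%}")
```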

For example, covariate shift is a type of dataset shift commonly seen in machine learning that can occur gradually over time or suddenly after deployment. It refers to the change in distribution of the input variables present in the training data and the test data, and it can cause an AI model to produce inaccurate outputs. By building dataset shift detection into a broader AI governance policy, organizations can get ahead of potential issues with model accuracy.
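One common way to catch this kind of shift, though certainly not the only one, is a two-sample statistical test comparing a feature’s distribution in the training data against what the model sees in production. The sketch below uses a Kolmogorov-Smirnov test on synthetic data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic example: a training-time feature vs. a production feature whose mean has drifted.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.6, scale=1.0, size=5_000)

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Possible covariate shift detected (KS={statistic:.3f}, p={p_value:.1e})")
else:
    print("No significant shift detected for this feature")
```

Running a check like this on a schedule, feature by feature, is one way to fold dataset shift detection into a broader governance policy.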

Beyond this, it is important to understand that all AI models have limitations. Neural network model uncertainty estimations can be used to approximate the uncertainty of parameters within a model, as well as uncertainty in the observed data. Uncertainty estimation sheds light on the limitations of intelligent systems, and presents an opportunity for the humans creating them to come up with novel solutions.
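Monte Carlo dropout is one illustrative way to get such an estimate (deep ensembles are another): dropout stays active at inference time, and the spread across repeated predictions serves as a rough measure of the model’s uncertainty. The untrained toy network below is only there to show the mechanics:

```python
import torch
import torch.nn as nn

# Toy regression network with dropout; untrained, for illustration only.
model = nn.Sequential(
    nn.Linear(4, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)


def mc_dropout_predict(model: nn.Module, x: torch.Tensor, samples: int = 100):
    """Run repeated stochastic forward passes; return mean prediction and std (uncertainty)."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(samples)])
    return preds.mean(dim=0), preds.std(dim=0)


x = torch.randn(1, 4)
mean, std = mc_dropout_predict(model, x)
print(f"prediction={mean.item():.3f}, uncertainty={std.item():.3f}")
```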

What are some barriers to building AI safety?

Concepts like AI safety are top-of-mind today, but this conversation is still relatively new. As AI training techniques like reinforcement learning coincided with the quick rise in data creation over the last decade, many prestigious brands brought products to market before assessing them for key ethics metrics like fairness.

From HR algorithms that were biased against women to facial recognition software that led to officers arresting the wrong people, it became clear that when AI products aren’t built with fairness, accountability, and explainability in mind, they can cause deep harm to users.

Unstructured data can be a big barrier to building AI that’s fair and safe. Structured data is stored in a predefined format and is easier to analyze using AI techniques. If you can map data into predefined fields, it is most likely structured data. The challenge is that 80 to 90% of the world’s data is unstructured, which means it’s not organized in a pre-defined way. When data is unstructured, it’s harder for intelligent systems to process and analyze.
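To make the distinction concrete, here is the same piece of information as a structured record with predefined fields and as free text that would first need to be parsed (the field names are purely illustrative):

```python
# Structured: predefined fields a program can query directly.
invoice_structured = {"vendor": "Acme Corp", "amount": 1250.00, "due_date": "2022-02-01"}
print(invoice_structured["amount"])  # 1250.0

# Unstructured: the same information buried in free text, which needs extraction
# (rules, NLP models, human review, etc.) before it can be analyzed at scale.
invoice_unstructured = "Please pay Acme Corp $1,250.00 no later than February 1, 2022."
```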

How can teams build AI safety into products?

Although unstructured data is tougher to work with, it’s not impossible. Several training techniques can help AI engineers build safety into their products. They include:

  • Personally identifiable information (PII) redaction, which anonymizes private details like photographs, licenses, and social security numbers (a minimal sketch follows this list).
  • Corrosion detection, which uses computer vision to spot corrosion and surface degradation in images and videos.
  • Document extraction, which pulls structured data out of semi-structured or unstructured documents.
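Here is the minimal sketch of PII redaction promised above. It is purely rule-based and illustrative; production systems typically combine patterns like these with trained entity-recognition models, and the patterns below are far from exhaustive:

```python
import re

# Illustrative patterns only; real PII redaction covers many more formats and locales.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def redact_pii(text: str) -> str:
    """Replace recognized PII with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


print(redact_pii("Contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED], SSN [SSN REDACTED].
```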

Techniques like these enhance AI safety in several ways. First, they increase safety for those whose information is in the datasets you’re using. If a data breach occurs, techniques like PII redaction keep data from falling into the wrong hands.

These techniques also teach algorithms to recognize patterns by exposing them to large amounts of data. This is crucial for building safe AI because algorithms must see examples of what something is along with what it isn’t. For example, if you want to teach an algorithm to recognize bananas in pictures, you must also show it pictures of apples so it doesn’t misclassify the latter as the former.
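A toy version of that idea, using fabricated mean-color features instead of real images, with bananas as the positive class and apples as the negative class:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Fabricated mean-RGB features: bananas skew yellow, apples skew red.
bananas = rng.normal(loc=[0.85, 0.80, 0.20], scale=0.05, size=(50, 3))
apples = rng.normal(loc=[0.80, 0.20, 0.20], scale=0.05, size=(50, 3))

X = np.vstack([bananas, apples])
y = np.array(["banana"] * 50 + ["apple"] * 50)  # positive and negative examples

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.84, 0.78, 0.22]]))  # -> ['banana']
```

Without the apple examples, the classifier would have no notion of what a banana is not, which is exactly the failure mode the paragraph above describes.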

Are enough teams prioritizing AI safety?

Survey after survey shows that people want more transparency from AI and more reason to trust it, so increasing AI safety should be a key priority for every company building these products. This will involve using a range of quality control techniques to apply AI within real-world scenarios. That sounds simple enough, but it has historically been a barrier to AI adoption. Algorithms often perform differently in controlled settings than they do upon deployment. So there’s a risk that even if you use the techniques covered in this article, things might go sideways once a product is out in the wild.

AI solutions built with safety in mind

At super.AI, our goal is to make artificial intelligence accessible to everyone and automate boring work so people can focus on what matters most. Although AI safety is an ongoing research problem that is far from solved, we spend a lot of time considering how to build checks and balances into the systems we create. Our technology has over 150 quality control mechanisms in place to enhance AI safety and guarantee output quality. For more information on AI safety, check out the following resources:

  • Download our Data Security Whitepaper to learn more about how super.AI implements security measures and handles customer data.
  • Read our blog, What Exactly Is AI?, to gain a better understanding of the limitations of artificial intelligence.
  • Book a personalized demo to discover how AI can benefit your application scenario.