
Australia’s Voluntary AI Safety Standard: The 10 guardrails explained

10/21/2024

The age of AI is here – no mistaking it.

Given how revolutionary and powerful this technology is, we’ve all (kind of) known that there is a pressing need for a little guidance around how we implement it ethically into our organisations.

As a result, I’ve been advocating for businesses to have bespoke AI policies around responsible commercial AI use for a few years now.

Wait, the Australian Government stepped up to the plate!

To my surprise, the Australian Government released its own AI guidelines in 2024, which it called "the 10 guardrails". I'm not surprised that they did it; I'm more shocked at how quickly it came out.

Typically, laws and regulations take time to emerge from official bodies – and right now, these guardrails are only classed as "voluntary" – but it's definitely a great step in the right direction.

So, what should businesses do about the 10 AI guardrails?

Adopting some – or all – of these guardrails will help your organisation in three ways:
  • they will set the ground rules for safe and responsible AI use
  • you’ll be far more prepared for future AI regulatory requirements
  • they will improve your organisation’s AI maturity level

Let me take you through some of the basics to get you up to speed and show you how to be compliant with these 10 guardrails surrounding AI use in business. 

They’re “voluntary” AI guardrails! Why does my business need to care?

Good point.

But to that, I point you to the Australian Government’s Department of Industry, Science and Resources page, which says:

“Adopting these guardrails will create a foundation for safe and responsible AI use. It will make it easier for any organisation to comply with any potential future regulatory requirements in Australia and emerging international practices.”

See the second sentence?

In other words, the implication is that stricter, compulsory legal regulations (both in Australia and internationally) are incoming. You don’t want to be caught unaware, ESPECIALLY as so many businesses are adopting and integrating AI right now.

I can imagine it will be much harder to “untangle” AI from your operations to make it compliant down the track, so it’s better to be proactive now and adopt AI in the right way to save you a lot of future grief.

Right. So, now that the “WHY” is out of the way, let’s discuss the “What are the 10 AI guardrails” and “How do I implement the 10 AI guardrails into my business” part.

Contents:

AI Guardrail 1: Accountability process
AI Guardrail 2: Risk Management
AI Guardrail 3: Protecting your AI systems
AI Guardrail 4: Evaluate AI performance
AI Guardrail 5: Human intervention
AI Guardrail 6: AI use transparency
AI Guardrail 7: External feedback
AI Guardrail 8: AI component transparency
AI Guardrail 9: Maintain compliance records
AI Guardrail 10: Engage stakeholders

AI Guardrail 1: Accountability process

This first guardrail is about establishing, implementing and publishing an internal accountability process around AI’s use in your organisation.

While there is no formal or legal set of regulatory compliance rules in Australia yet, I suggest aligning your AI policies with your business values and/or code of ethics – that’s a great start.

As you can imagine, this forms the AI governance bedrock upon which all of the other guardrails sit.

So, Guardrail One is all about ensuring you have the Standard Operating Procedures, as well as the capabilities required, to oversee the safe, ethical and responsible use of AI every day.

To adopt this, it’s important to put together AI policies and procedures around factors like:
  • Who is in charge of AI’s use in your business
  • How you plan to use it
  • What is considered acceptable and not acceptable use
  • What training your team needs to use it correctly.
This is a GREAT start, as it lays the groundwork for the nine remaining AI guardrails.
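
To make this tangible, here’s a minimal sketch of how those policy factors could be captured in a machine-readable form. This is purely illustrative – every field name, role and value below is my own invention, not part of the standard:

```python
from dataclasses import dataclass

@dataclass
class AIUsePolicy:
    """Illustrative record of the accountability factors listed above."""
    owner: str                    # who is in charge of AI's use in the business
    intended_uses: list[str]      # how you plan to use it
    acceptable_use: list[str]     # what is considered acceptable use
    prohibited_use: list[str]     # ...and what is not
    required_training: list[str]  # training your team needs to use it correctly

policy = AIUsePolicy(
    owner="Head of Operations (example role)",
    intended_uses=["drafting internal reports", "customer chat triage"],
    acceptable_use=["a human reviews all AI output before publication"],
    prohibited_use=["entering customer personal data into public AI tools"],
    required_training=["responsible AI basics", "tool-specific onboarding"],
)
print(policy.owner)
```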

AI Guardrail 2: Risk Management

While setting up accountability is good, minimising potential risk is just as important.

So, this AI guardrail is about establishing a system that identifies and mitigates risks based on the way your business uses AI.

For instance, when you’re integrating AI into your operations:
  • How will AI potentially negatively affect staff, customers and other stakeholders?
  • Could the use of automated data collection present a danger if individuals were ever identified?
  • Is there a risk that AI bots on your website could expose harmful information?

These are the factors that must be identified and then addressed.

From here, it’s important to carry out regular risk assessments to keep your mitigations current, as AI use can evolve and change over time.
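
For illustration, a risk register can be as simple as a list of identified risks with ratings and scheduled review dates. A minimal sketch – all risks, ratings and dates below are invented:

```python
from datetime import date

# Illustrative risk register: each identified AI risk gets a likelihood,
# an impact rating and a date by which it must be reassessed.
risk_register = [
    {"risk": "AI bot on the website exposes harmful information",
     "likelihood": "low", "impact": "high", "review_by": date(2025, 3, 31)},
    {"risk": "automated data collection re-identifies individuals",
     "likelihood": "medium", "impact": "high", "review_by": date(2025, 3, 31)},
]

# Flag anything overdue for its regular risk assessment.
for item in risk_register:
    if item["review_by"] < date.today():
        print(f"Reassess now: {item['risk']}")
```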

AI Guardrail 3: Protecting your AI systems

Just as we’re trying to protect the stakeholders around your AI tech, you also need to protect your AI itself.

As smart as AI may seem, it can still be vulnerable to glitches, external tampering, data corruption and cybersecurity issues.

So, as a business, it’s important to be vigilant about protecting your own tech’s integrity.
If your AI gets affected, the repercussions could be devastating to your business and beyond. So, have a process in place to keep your AI systems insulated, protected and overseen.
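
As one small, concrete example of “keeping systems overseen”, you could verify that a deployed model file hasn’t been tampered with. A minimal sketch, assuming you recorded a known-good SHA-256 hash at deployment time (the hash value and file path below are placeholders):

```python
import hashlib

# Placeholder: the known-good hash you recorded when the model was deployed.
EXPECTED_SHA256 = "replace-with-your-recorded-hash"

def model_file_is_intact(path: str) -> bool:
    """Compare the file's current SHA-256 hash against the recorded one."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_SHA256

# e.g. halt and alert if model_file_is_intact("models/chatbot.bin") is False
```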

AI Guardrail 4: Evaluate AI performance

With the above guardrails in place, it’s now time to run your platforms and keep a close eye on what AI is actually doing and how it behaves, in line with your plans and expectations – all so you can act accordingly.

This guardrail specifically says “before deployment”, which is obviously a smart idea. But it shouldn’t stop there. As businesses, it’s important that we continually evaluate AI’s performance and how it functions.

Why? Let’s be honest: AI tech is very new, and we’ve never seen it in full action. We also don’t truly know what it’s capable of.

That’s particularly true when we implement AI into niche circumstances – we don’t fully know how it will act now and into the future, so it’s better to be proactive.

So, to best respond to this unknown with AI, it’s important to test, monitor and adapt accordingly.

This shouldn’t come as a surprise, as it’s standard business practice. Just have your AI performance expectations clearly defined, as well as the level of variability you’re happy to accept before you take corrective action.
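
Here’s a minimal sketch of that idea – an expected metric, an accepted level of variability, and a check that flags corrective action. Both numbers are invented for illustration:

```python
# Define the expectation and the accepted variability up front,
# then check measured performance against them (numbers invented).
EXPECTED_ACCURACY = 0.95  # your clearly defined performance expectation
TOLERANCE = 0.03          # the variability you're happy to accept

def needs_corrective_action(measured_accuracy: float) -> bool:
    """True when performance drifts outside the accepted range."""
    return measured_accuracy < EXPECTED_ACCURACY - TOLERANCE

if needs_corrective_action(0.90):
    print("Performance outside the accepted range - time to act.")
```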

AI Guardrail 5: Human intervention

Building on the above, it’s absolutely essential that there’s always a way for a responsible human to be able to intercept, intervene and take control away from AI – at all times.

I think this is the most important of them all: human oversight.

In the rapid adoption of AI into business, some organisations may not realise just how much control they’re handing over to their AI systems. So, if things were to go wrong, they wouldn’t know in the moment how to actually take back control.

It may seem like an innocent procedure – say, a simple AI algorithm combining data from different channels. However, if it starts mixing those data points up due to an accidental code issue or a glitch, it could begin exposing data to the public, putting privacy at risk.

You don’t want to be the company that can’t address this quickly.

You want to know for sure that a human could pull the plug and revert everything back if needed.

Human oversight at all stages, and across the whole system, is essential so that control can be taken back if something unexpected were ever to occur.
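
To make that concrete, here’s a minimal sketch of a human “kill switch”, assuming your automated pipeline checks a flag that a person can flip at any time. The names are my own illustration, not a prescribed mechanism:

```python
import threading

# A flag that a responsible human can flip at any time.
HUMAN_OVERRIDE = threading.Event()

def run_automated_step(task: str) -> None:
    """Run one automated step - but only while no human has intervened."""
    if HUMAN_OVERRIDE.is_set():
        raise RuntimeError("Human override active: automated processing halted.")
    print(f"Processing: {task}")  # normal AI processing would happen here

run_automated_step("combine channel data")
HUMAN_OVERRIDE.set()  # an operator spots a problem and pulls the plug
# every subsequent call to run_automated_step() now refuses to run
```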

AI Guardrail 6: AI use transparency

There’s one great way to give more people peace of mind around your use of AI: by telling them!

You don’t have to share trade secrets. 

However, when you’re transparent about your use of AI, AI-enabled operations and AI-generated content, you can establish trust and set expectations, especially if you want people to make decisions based on this.

For instance, telling people they’re speaking to a bot gives them realistic expectations and mitigates the frustration of feeling deceived. Another example is telling people they’re reading AI-generated content so they can make their own decisions and perform their own due diligence.

(By the way, NONE of this article is written by AI. Purely me!)

How you disclose AI use is up to you – I would suggest doing it in a way that people can see and acknowledge.
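
As a trivial illustration, if you publish AI-assisted content, the disclosure can be as simple as a visible notice attached to everything the system produces (the wording below is invented):

```python
# A visible notice attached to everything the AI system produces.
AI_DISCLOSURE = "Note: this content was generated with the assistance of AI."

def publish_with_disclosure(content: str) -> str:
    """Append the AI-use notice so readers can apply their own judgement."""
    return f"{content}\n\n{AI_DISCLOSURE}"

print(publish_with_disclosure("Quarterly summary drafted by our AI tool."))
```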

AI Guardrail 7: External feedback

As with anything in business, different parties all get affected differently.

So, while management may have the majority of the say around AI use, it’s important to establish a way for other stakeholders to voice their opinions or raise challenges to your use of AI.

This allows you to receive feedback from different perspectives, which can actually help your business make more informed decisions around your use of AI.

For example, your staff may notice something that AI is doing that you don’t. Your customers may spot glitches or outcomes that you wouldn’t want but just don’t see.

Having an effective AI feedback loop will give you a more holistic view of how things are working.

AI Guardrail 8: AI component transparency

This one is more for the builders of AI systems, but it can still apply to every organisation too, in specific cases.

With so many organisations building their own AI platforms or at least being involved in the AI supply chain, it’s important that all of the organisations in the chain are transparent about how each system or component works.

The intention behind this is that if there is clear knowledge of how each AI platform was built, how it works and how it uses data, then better decisions can be made around understanding and managing risk.

This is quite important because it’s not just AI algorithms that are evolving: the platforms built on them are evolving too. With more organisations crafting their own AI platforms within their own niches, ensuring greater transparency across all of those systems is important.

It can be very hard to ensure the ethical use of AI if there are so many systems out there that no one knows anything about!
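
One lightweight way to provide this transparency is a short, machine-readable description shared alongside each component – in the spirit of a “model card”. A minimal sketch, with every field and value invented for illustration:

```python
import json

# Illustrative description shared with a component along the AI supply chain,
# in the spirit of a "model card" - every field here is invented.
component_card = {
    "component": "customer-email-classifier",
    "built_by": "Example Vendor Pty Ltd",
    "purpose": "routes inbound emails to the right team",
    "data_used": ["historical support emails (anonymised)"],
    "known_limitations": ["untested on languages other than English"],
}

print(json.dumps(component_card, indent=2))
```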

AI Guardrail 9: Maintain compliance records

It’s one thing to understand the Australian AI Guardrails, and it’s another to implement AI policies and procedures around them to become compliant in your organisation.

And it’s another thing again to keep and maintain records that demonstrate this compliance and allow third parties to assess them. But that’s exactly what this AI guardrail asks businesses to do.

Now, remember that these Australian AI Guardrails, as they stand right now, are only voluntary, which means they don’t (at this point) create a legal obligation around AI systems and their use.

However, the ninth AI guardrail says it’s important to keep these records to prove compliance if you were ever to be assessed. I’m assuming this is intended to start setting expectations around AI compliance and record keeping, or at least to get businesses thinking about it.

I’m also assuming it won’t be long before some of these become true legal obligations, so this AI guardrail should be a high priority.
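
For illustration, compliance records can start as simply as an append-only, timestamped log that a third party could later review. A minimal sketch, assuming a JSON Lines file (the format and fields are my own choice, not prescribed by the guardrail):

```python
import json
from datetime import datetime, timezone

def record_compliance_event(event: str, detail: str,
                            log_path: str = "ai_compliance_log.jsonl") -> None:
    """Append a timestamped entry that a third party could later review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "detail": detail,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_compliance_event("risk_assessment", "quarterly chatbot risk review completed")
```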

AI Guardrail 10: Engage stakeholders

Last but not least, this tenth guardrail outlines how ethics plays into AI use in business: engaging with stakeholders to evaluate their needs and circumstances.

It highlights the need to keep stakeholders in mind when it comes to safety, diversity, inclusion and fairness, while reducing the risk or consequences of AI if it were to show signs of bias, inaccessibility or prejudice.

In other words, this AI guardrail instructs organisations to connect with stakeholders and proactively monitor AI’s impact on them to uphold ethical AI principles – which is a good thing.

How to use the Australian AI guardrails in an organisation

So, there they all are – all ten, as they stand right now.

Expect them to grow and shift over time, but the goal behind these is to start the process of building a strong foundation for safe, ethical and responsible AI use in business.

This is just a general guide and my own interpretation of the Australian AI Guardrails. It doesn’t take into account your specific business circumstances or requirements. For more specific information, visit the Australian Government’s website here.

Let's discuss your AI Ethics needs with Melotti AI

At Melotti AI, we champion responsible AI use in Australian businesses. Engaging with us is not only straightforward, but a smart investment in your organisation’s future.

Our suite of AI consulting services empowers organisations to forge ahead with confidence and integrity in their AI implementation.

Speak to us for all of your AI Consulting needs, including:
  • AI Ethics Advisory
  • Responsible AI Competency Training
  • AI Diligence Evaluations
  • Human-Centric AI Implementation
  • Ethical AI Framework Development
AI Governance & AI Policy Advisory
  • AI Prompt Engineering
