The age of AI is here – no mistaking it.
Given how revolutionary and powerful this technology is, we’ve all (kind of) known that there’s a pressing need for guidance around how we implement it ethically in our organisations. As a result, I’ve been advocating for businesses to have bespoke AI policies around responsible commercial AI use for a few years now. Then, the Australian Government stepped up to the plate!
To my surprise, the Australian Government released its own AI guidelines in 2024, which they called “the 10 guardrails”. I’m not surprised that they did it; I’m more shocked at how fast this came out.
Typically, laws and regulations take time to emerge from official bodies. Right now, these guardrails are only classed as “voluntary”, but they’re definitely a great step in the right direction. So, what should businesses do about the 10 AI guardrails?
Adopting at least a few, or all, of these will help your organisation in several ways.
Let me take you through some of the basics to get you up to speed and show you how to comply with these 10 guardrails around AI use in business.

They’re “voluntary” AI guardrails! Why does my business need to care?
Good point.
But to that, I point you to the Australian Government’s Department of Industry, Science and Resources page, which says:
See the second sentence?
In other words, it can be implied that stricter, compulsory legal regulations (both in Australia and internationally) are incoming. You don’t want to be caught unaware, ESPECIALLY as so many businesses today are adopting and integrating AI right now. I can imagine it will be much harder to “untangle” AI from your operations to make it compliant down the track, so it’s better to be proactive now and adopt AI in the right way to save yourself a lot of future grief.

Right. So, now that the “WHY” is out of the way, let’s discuss “What are the 10 AI guardrails?” and “How do I implement the 10 AI guardrails into my business?”.

AI Guardrail 1: Accountability process
This first guardrail is about establishing, implementing and publishing an internal accountability process around AI use in your organisation.
While there is no formal or legal set of regulatory compliance rules in Australia yet, I suggest making your AI policies align with your business values and/or code of ethics – that’s a great start.
So, Guardrail One is all about ensuring you have the Standard Operating Procedures, as well as the capabilities required to oversee the safe, ethical and responsible use of AI every day.
To adopt this, it’s important to put together AI policies and procedures around factors like ownership and governance, staff capability and your approach to regulatory compliance.
This is a GREAT start, as it lays the groundwork for the remaining nine AI guardrails.
AI Guardrail 2: Risk Management
While setting up accountability is good, minimising potential risk is just as important.
For instance, when you’re integrating AI into your operations: what could go wrong, who could be affected, and how severe could the impact be?
These are the factors that must be identified and then addressed. From here, it’s important to carry out regular risk assessments to uphold your risk mitigation, as AI use can evolve and change over time.

AI Guardrail 3: Protecting your AI systems
Just as we’re trying to protect the stakeholders around your AI tech, you also need to protect your AI itself.
So, as a business, it’s important to be vigilant about protecting your own tech’s integrity.
If your AI gets compromised, the repercussions could be devastating to your business and beyond. So, have a process in place to keep your AI systems insulated, protected and overseen.

AI Guardrail 4: Evaluate AI performance
With the above guardrails in place, it’s now time to run your platforms and keep a close eye on how your AI actually works: what it’s doing and how it behaves against your plans and expectations, so you can act accordingly.
This guardrail specifically says “before deployment”, which is obviously a smart idea. But it shouldn’t stop there. As businesses, it’s important that we continually evaluate AI’s performance once it’s live.
That’s particularly true when we implement AI into niche circumstances – we don’t fully know how it will act now and into the future, so it’s better to be proactive.
This shouldn’t come as a surprise, as it’s standard business practice. Just have your AI performance expectations clearly defined, as well as the level of variability you’re happy to accept before you take corrective action.
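To make that concrete, here’s a minimal sketch of what “defined expectations plus an acceptable variability band” could look like in practice. The metric, the figures and the function name are all hypothetical placeholders, not part of the guardrail itself.

```python
# Hypothetical sketch: checking an AI metric against a defined
# expectation and an acceptable variability band.

BASELINE_ACCURACY = 0.92   # the performance you expect (assumed figure)
TOLERANCE = 0.03           # the variability you accept (assumed figure)

def needs_corrective_action(measured_accuracy: float) -> bool:
    """Flag the AI system for review when performance drifts
    outside the agreed band."""
    return abs(measured_accuracy - BASELINE_ACCURACY) > TOLERANCE

# Example: a weekly evaluation run
if needs_corrective_action(0.86):
    print("Performance outside agreed band - trigger human review.")
```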
AI Guardrail 5: Human intervention
Building on the above, it’s absolutely essential that there’s always a way for a responsible human to be able to intercept, intervene and take control away from AI – at all times.
In the rapid adoption of AI into business, some organisations may not realise how much control they’re handing over to their AI systems. So, if things were to go wrong, they wouldn’t know in the moment how to actually take back control.
It may seem like an innocent procedure, such as a simple AI algorithm combining data from different channels. However, if it starts mixing these data points up due to an accidental code issue or a glitch, it could begin exposing data to the public and putting privacy at risk. You don’t want to be the company that can’t address this quickly.
Human oversight at all stages and across the whole system is essential so that control can be taken back if something unexpected were ever to occur.
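As a rough illustration of what “a human can always take control” could mean in software, here’s a sketch of an approval gate that routes risky AI actions to a person, plus a switch that halts the system entirely. The function names and the risk check are my own assumptions, a sketch rather than a prescribed implementation.

```python
# Hypothetical sketch: a human approval gate plus a kill switch,
# so a responsible person can always intercept the AI.

ai_enabled = True  # global kill switch a human can flip at any time

def requires_human_review(action: dict) -> bool:
    """Assumed risk check: route anything touching personal data
    or money to a human before it runs."""
    return bool(action.get("touches_personal_data") or action.get("moves_money"))

def execute(action: dict) -> str:
    if not ai_enabled:
        return "halted: AI system disabled by operator"
    if requires_human_review(action):
        return "queued: waiting for human approval"
    return "executed automatically"

print(execute({"touches_personal_data": True}))  # queued for a human
```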
AI Guardrail 6: AI use transparency
There’s one great way to give more people peace of mind around your use of AI: by telling them!
You don’t have to share trade secrets.
For instance, telling people they’re speaking to a bot can give them realistic expectations and mitigate the frustration of feeling deceived. Another example is telling people that they’re reading AI-generated content so that they can make their own decisions and perform their own due diligence.
(By the way, NONE of this article is written by AI. Purely me!) How you disclose AI use is up to you – I would suggest doing it in a way that people can see and acknowledge.
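For a simple flavour of what visible disclosure can look like in a product, here’s a tiny hypothetical example that prepends a notice to a chatbot’s replies. The wording and function are illustrative only; how and where you disclose remains your call.

```python
# Hypothetical sketch: making AI involvement visible to the user.

DISCLOSURE = "[This reply was generated by an AI assistant.]"

def send_chatbot_reply(reply_text: str) -> str:
    """Attach a plainly visible AI disclosure to every bot reply."""
    return f"{DISCLOSURE}\n{reply_text}"

print(send_chatbot_reply("Your order has been shipped."))
```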
AI Guardrail 7: External feedback
As with anything in business, different parties are all affected differently.
So, while management may have the majority of the say around AI use, it’s important to establish a way for other stakeholders to voice their opinions or challenge your use of AI.
For example, your staff may notice something that AI is doing that you don’t. Your customers may spot glitches or outcomes that you wouldn’t want but just don’t see.
Having an effective AI feedback loop will give you a more holistic view of how things are working.

AI Guardrail 8: AI component clarity
This one is more for the builders of AI systems, but it can still apply to any organisation in specific cases.
With so many organisations building their own AI platforms or at least being involved in the AI supply chain, it’s important that all of the organisations in the chain are transparent about how each system or component works.
This is quite important because it’s not just AI algorithms that are evolving: the platforms being built on top of them are too. With more organisations crafting AI platforms within their own niches, greater transparency around all of those systems matters.
It can be very hard to ensure the ethical use of AI if there are so many systems out there that no one knows anything about!

AI Guardrail 9: Maintain compliance records
It’s one thing to understand the Australian AI Guardrails, and it’s another to implement AI policies and procedures around them to become compliant in your organisation.
Now, remember that these Australian AI Guardrails as they stand right now are only voluntary, which means they don’t (at this point) create a legal obligation around AI systems and their use.
However, the ninth AI guardrail says it’s important to keep these records to prove compliance if you were ever to be assessed. I’m assuming that this is intended to start setting expectations around AI compliance and record keeping, or at least get businesses thinking about it. I’m also assuming it won’t be long before some of these guardrails become true legal obligations, so this one should be a high priority.
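If you’re wondering what “keeping records” might look like day to day, here’s a minimal hypothetical sketch: an append-only log of significant AI decisions and reviews, each timestamped and attributed to a person. The fields and file name are assumptions; the actual records expected would be things like risk assessments, test results and policy documents.

```python
# Hypothetical sketch: an append-only compliance log for AI activity.
import json
from datetime import datetime, timezone

LOG_FILE = "ai_compliance_log.jsonl"  # assumed location

def record_ai_event(system: str, event: str, reviewer: str) -> None:
    """Append a timestamped, human-attributable record of an
    AI-related decision, test or review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,
        "reviewer": reviewer,
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_ai_event("customer-chatbot", "quarterly risk assessment completed", "J. Smith")
```

AI Guardrail 10: Engage stakeholders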
Last but not least, this tenth guardrail outlines how ethics play into AI use in business: engaging with stakeholders to evaluate their needs and circumstances.
It highlights the need to keep stakeholders in mind when it comes to safety, diversity, inclusion and fairness, and to reduce the risk or consequences of AI if it were to show signs of bias, inaccessibility or prejudice.
How to use the Australian AI guardrails in an organisation
So, there they all are – all ten, as they stand right now.
Expect them to grow and shift over time, but the goal behind these is to start building a strong foundation for safe, ethical and responsible AI use in business.
This is just a general guide and my own interpretation of the Australian AI Guardrails. It doesn’t take into account your specific business circumstances or requirements. For more specific information, visit the Australian Government’s website here.

Let's discuss your AI Ethics needs with Melotti AI
At Melotti AI, we champion responsible AI use in Australian businesses. Engaging with us is not only straightforward but also a smart investment in your organisation’s future.
Our suite of AI consulting services empowers organisations to forge ahead with confidence and integrity in their AI implementation. Speak to us for all of your AI consulting needs.