AUSSIE WORKERS RISK EXPOSING BUSINESS SECRETS USING AI TOOLS
Blockstars Technology
- 81% of Australian employees admit to sharing confidential information with free AI tools, according to an HP and Microsoft survey
- Free AI platforms can store and reuse proprietary data without organisational oversight, posing significant security risks
- The Australian government introduced the Policy for the Responsible Use of AI in September 2024, signalling increased scrutiny of AI governance
- Privacy Act breaches can result in penalties of up to $50 million or 30% of adjusted turnover
- Businesses are urged to develop in-house AI solutions, rather than rely on public platforms, to maintain data security
 
Australian businesses are facing a new kind of data risk: their own employees. A recent report by HP and Microsoft surveying Australian business and IT decision-makers found that 81% of employees admit to sharing confidential information with free AI tools.
The data comes from the 2025 HP Windows 11 SMB (Small and Medium-Sized Businesses) Study, commissioned by HP and conducted by Edelman Data & Intelligence in June 2025, which surveyed 500 respondents in Australia, including business and IT decision-makers.
While public platforms such as ChatGPT promise short-term efficiency gains, they also carry potentially severe long-term consequences. By feeding sensitive business data into external systems, companies risk exposing trade secrets, leaking intellectual property and undermining client trust.
As HP Australia and New Zealand managing director Brad Pulford recently told the Canberra Times: “AI has come to us like drinking from a fire hose: it’s come pretty quickly into this space.”
“With the extremely fast pace of AI adoption, many organisations are desperate to keep up,” said Kosala Aravinda, CEO of Blockstars Technology, an Australian leader in the development of AI. “But efficiency at any cost is a dangerous trade-off. Every confidential document or client record dropped into an AI system could ultimately end up outside your control. The reputational, legal and competitive risks are profound.”
Blockstars noted that while employees are embracing these tools at speed, few understand that free AI platforms are often trained on user inputs. This means proprietary data can be stored, reused or exposed in ways that organisations cannot see or regulate. The pressure is mounting for Australian businesses to act fast and implement their own in-house AI to protect themselves.
Adding to this urgency, the Australian federal government has declared AI adoption a national priority. In September 2024, the Policy for the Responsible Use of AI in Government came into effect, positioning Australia as a leader in safe, accountable AI. The government also launched the National Framework for the Assurance of Artificial Intelligence in Government, underscoring its commitment to consistent, trusted use of AI across the public sector. Blockstars says these moves are a clear signal that private organisations must prepare for heightened scrutiny and higher standards of AI governance.
Australia’s Privacy Act 1988 already places strict requirements on the handling of personal and sensitive information. Amendments passed in late 2022 raised penalties for serious breaches to as high as $50 million, or 30% of adjusted turnover, whichever is greater. The Act is now under active review, with a specific focus on AI and data security, signalling that tougher obligations are on the horizon.
“Businesses that fail to take data governance seriously risk being caught out by this wave of reform,” Kosala added. “The cost of AI, if deployed recklessly, far outweighs the cost of compliance. A serious breach can cripple a business financially, reputationally and operationally, hence the need for businesses to build their own in-house AI tools.”
The fallout from a data leak goes far beyond fines. Legal expenses, class actions, breach responses and loss of contracts all carry heavy costs. For many businesses, the reputational damage is even greater: once customers or partners lose trust, it is often impossible to win them back. In competitive industries such as finance, healthcare and professional services, this trust deficit can be fatal.
“Companies spend decades building their brand,” said Kosala. “One mishandled dataset in a free AI tool can undo that work overnight. Once the public sees you as careless with their information, the cost of rebuilding that trust is almost insurmountable.”
“In the near future, this won’t just be a security discussion; it will be an operational one, especially for tier one and tier two companies,” Kosala said. “The strategic question will be simple: do you host your own AI internally, or do you rely on public sources? Those who invest in secure, in-house AI will win the business. Those who don’t will be left behind.”
“Efficiency is not free; it must be balanced with responsibility,” Kosala concluded. “The winners in this new era will be those who adopt AI, but do so in a way that keeps their most valuable asset, their data, safe.”
Blockstars Technology is calling on Australian organisations to move away from reliance on free public AI tools and instead build private, in-house solutions that maintain full control, compliance and confidentiality.
About us:
Blockstars Technology specialises in secure, enterprise-grade AI platforms that empower organisations to reap the benefits of artificial intelligence without compromising privacy, compliance or control. For more information, visit
Contact details:
Blockstars Technology
Kirstie at [email protected]
+61 404 682 986