How we handle data, security & compliance
What "your data stays yours" actually means in practice — frameworks, infrastructure, and accountability you can verify.
Infrastructure before automation.
Every Tech Horizon Labs engagement starts with a written record of where your data lives, who can touch it, and how it gets out if you need to leave. We map your build to the Australian privacy frameworks that actually apply to your business, deploy on infrastructure you can audit and exit, and document what we did so your auditors and your future self can verify it. This page is the long-form version.
Frameworks we map your build to
Plain-English versions of the standards your auditor or your customer will ask about.
Australian Privacy Act 1988 (APPs)
The 13 Australian Privacy Principles cover collection, use, disclosure, storage, and access of personal information. We map every data flow in your build to the APPs that apply, so when a customer asks "where does my data go?" there is a documented answer.
Notifiable Data Breaches scheme
Under the Privacy Act, certain breaches must be reported to the OAIC and to affected individuals within strict timeframes. We document the breach detection and response procedure as part of every deployment, so the obligation does not catch you flat-footed.
ISO 27001
The international standard for information security management. Even if you are not pursuing certification, mapping your controls to ISO 27001 categories — access control, asset management, incident response — is the cheapest way to know what is missing.
ISO 42001
Released in 2023, the first international management standard specifically for AI systems. It covers AI risk assessment, lifecycle controls, and accountability. Buyers and regulators are starting to ask about it. We design with ISO 42001 categories in mind from day one.
SOC 2 Trust Services Criteria
Used widely by US tech vendors and increasingly required by Australian enterprises buying from smaller suppliers. We know which of the five SOC 2 criteria — Security, Availability, Processing Integrity, Confidentiality, Privacy — apply to your build and document accordingly.
Industry-specific (My Health Records, NDIS, CPS 234)
For allied health, NDIS providers, and financial services we build the relevant regulator's requirements into the engagement: health-information privacy rules for healthcare, NDIS Quality and Safeguards Commission requirements for disability services, and APRA CPS 234 if you ever touch financial services data.
Where your AI runs — and where it doesn't
No hyperscaler lock-in by design.
Tech Horizon Labs deploys on infrastructure you can audit, exit, and own. We don't default to AWS, GCP, or Azure — and that's deliberate. The Australian SMBs we work with don't want their operations tied to a hyperscaler's pricing changes, regional outages, or data sovereignty roulette.
Replit
Client-facing tools, custom dashboards, and rapid iteration. Hosted infrastructure with full version control, exportable to your own server at any time. The project lives in your account, not ours.
IONOS / VentraIP
German-owned IONOS and Australian-owned VentraIP hosting with AU-region data centres for client websites and applications. Predictable pricing, no surprise hyperscaler bills. VentraIP for clients who want full Australian data sovereignty.
Client-owned infrastructure
For clients with existing on-premise servers, NAS devices, or VPS contracts, we deploy directly into the environment you already control. Code and data never leave systems you own.
Local AI models
For sensitive workloads, AI runs on the client's own hardware via LM Studio or Ollama. No data leaves the building. See our tool stack →
None of this is anti-hyperscaler — it's pro-buyer-control. If a project genuinely needs AWS or GCP, we'll deploy there. We just don't default to it because the consultant gets a partner discount.
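To make the local-model pattern concrete: Ollama exposes an HTTP API on the client's own machine (by default at localhost:11434), so every prompt stays on hardware the client owns. This is our own illustrative sketch, not a client deployment — the model name is a placeholder for whatever the client has pulled locally.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the request body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama instance.

    Nothing here leaves the building: the request goes to localhost only.
    Requires a local Ollama install with the model pulled,
    e.g. `ollama pull llama3`.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping the endpoint or model is a config change, not a rebuild — which is the point of keeping the inference layer on infrastructure the client controls.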
How we apply responsible AI principles
Responsible AI isn't a marketing claim — it's a checklist. On every build we apply principles from Google's Responsible AI for Developers tracks (Fairness & Bias, Privacy & Safety, Interpretability & Transparency) and Model Armor, Google Cloud's service for screening prompts and responses against injection attacks and data leakage.
In practice that means: bias checks before any model goes into a hiring or pricing decision; differential-privacy patterns where individual records could be re-identified; prompt-injection defences on any system that takes user input and passes it to an LLM; and explainability documentation for every model that affects a customer or staff member.
These are training tracks we have completed and applied — not formal certifications. The certifications worth having for AI safety work in 2026 don't really exist yet outside ISO 42001, and we'll sit those when they do.
Recognition
- Listed on the Australian Government National AI Centre directory — categorised under AI for Cyber security, Skills and training, Systems integration, Generative AI, Large language models, and Virtual assistant.
- Member, Australian Computer Society
- Member, Noosa Chamber of Commerce
- ABN 80 976 285 425 — registered Australian business
Common questions
Does Tech Horizon Labs have ISO 27001 or SOC 2 certification?
No — those are certifications for the consultancy itself, and at our size the audit cost is disproportionate. What we do is map every client engagement to the relevant standards and document it, so your auditor (or you) can verify the controls are in place. We can support clients seeking their own ISO 27001 or SOC 2 certification by ensuring the systems we build meet those criteria from day one.
Where does our data live when you build for us?
By default, on infrastructure you control or can audit: your existing servers, your IONOS or VentraIP hosting, or a Replit account in your name. If a build genuinely needs cloud AI APIs (Claude, ChatGPT, Gemini), we route through the vendor's enterprise tier where data isn't retained for training, document which prompts are sent and what's redacted, and disclose every external dependency.
What happens to our data if we stop working with Tech Horizon Labs?
You keep everything. Code in your repository. Data on your servers. Documentation in your shared drive. There are no proprietary file formats, no licence keys we hold over you, and no admin accounts we don't transfer. If you end the engagement tomorrow, you can run the systems we built without us.
Are you Google Cloud, AWS, or Azure certified?
No, and that's intentional. The Tech Horizon Labs delivery stack is built on Replit, IONOS, VentraIP, client-owned infrastructure, and local AI models — not the hyperscalers. We've completed Google's Responsible AI for Developers tracks and Model Armor training, which we apply on every build, but we don't hold Cloud Engineer certifications because they don't reflect what we actually do.
How do you handle the Privacy Act and the Notifiable Data Breaches scheme?
Every engagement includes a written data flow document that maps personal information collection, storage, use, and disclosure to the relevant Australian Privacy Principles. Where the build creates new data (AI summaries, generated content), the data flow is updated. Breach detection is built in via logging and monitoring, with a documented response procedure that meets the OAIC's notification timelines.
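The data flow document can live as a machine-readable register alongside the code, so the answer to "which flows does APP 11 cover?" is a query, not an archaeology project. A hedged sketch of the shape we mean — the field names, flow names, and APP mappings below are illustrative, not a real client register:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One row of an engagement's data flow register: what personal
    information moves where, and which Australian Privacy Principles apply."""
    name: str
    collected: str           # what personal information is involved
    stored_at: str           # system holding it (client-controlled by default)
    disclosed_to: list[str]  # external parties, if any
    apps: list[int]          # APP numbers this flow is mapped to (1-13)

REGISTER = [
    DataFlow(
        name="intake-form",
        collected="name, email, enquiry text",
        stored_at="client database (AU region)",
        disclosed_to=[],
        apps=[3, 5, 11],  # collection, notification, security
    ),
    DataFlow(
        name="ai-summary",
        collected="generated summaries of enquiry text",
        stored_at="client database (AU region)",
        disclosed_to=["LLM vendor enterprise API"],
        apps=[6, 8, 11],  # use/disclosure, cross-border, security
    ),
]

def flows_touching_app(app_number: int) -> list[str]:
    """Answer the auditor's question: which flows does APP n cover?"""
    return [f.name for f in REGISTER if app_number in f.apps]
```

When a build creates new data (the `ai-summary` row above), updating the register is one more entry, and the cross-border disclosure it introduces (APP 8) is visible at a glance.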
Do you sign NDAs and data processing agreements?
Yes. We sign mutual NDAs as a standard part of pre-discovery, and a data processing agreement covering Australian Privacy Principles compliance is part of every signed engagement. If your business has its own template, we'll review and sign yours.
Want this kind of attention to your data?
Bring your current setup, your concerns, and any incidents that made you start asking these questions. We'll tell you what we see and what we'd do about it.
Book a free pre-discovery call →
Queensland HQ · Deployments across Australia · Remote-first delivery
This site is protected by Cloudflare's enterprise-grade network security.