
AI Safety at HIO

HIO is committed to preserving human expertise in the age of AI.  

Building safe and responsible artificial intelligence means addressing the risks of AI head-on.  At HIO, we perform ethical audits of our models to minimize impact across all of the LLM risk categories identified by Weidinger et al. (2022).  This protects against misinformation, discrimination, and malicious use of HIO software, but it doesn't stop there.  HIO's fine-tuning procedure builds accuracy and transparency into every HIO model, with engineering and strategic adjustments that make your model cite your proprietary documents directly and minimize the potential for AI hallucinations.

01

AI Guardrailing

HIO models are vetted by our Product and AI Ethics teams to firmly establish what is in- and out-of-scope for a given model. This helps reduce hallucinations and ensures that your model only answers the questions you want it to.  Our model citations bring transparency and accountability to your deployed AI system.
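To make the idea concrete, here is a minimal sketch of what a scope guardrail with citations can look like. The topic names, the classifier, and the citation format are illustrative placeholders, not HIO's production implementation.

```python
# Illustrative sketch only: a scope check that refuses out-of-scope queries
# and attaches a source citation to every in-scope answer.
from dataclasses import dataclass, field

ALLOWED_TOPICS = {"benefits", "onboarding", "expense_policy"}  # placeholder scope

@dataclass
class ScopedAnswer:
    text: str
    citations: list[str] = field(default_factory=list)  # source document IDs
    in_scope: bool = True

def classify_topic(query: str) -> str:
    """Placeholder topic classifier; a real system would use a trained model."""
    q = query.lower()
    if "expense" in q or "reimburse" in q:
        return "expense_policy"
    if "401k" in q or "insurance" in q:
        return "benefits"
    return "unknown"

def answer(query: str) -> ScopedAnswer:
    topic = classify_topic(query)
    if topic not in ALLOWED_TOPICS:
        # Refuse rather than guess: out-of-scope queries never reach the model.
        return ScopedAnswer(
            text="That question is outside what this assistant covers.",
            in_scope=False,
        )
    # A real deployment would ground the answer in retrieved documents;
    # here we return a stub with the citation attached.
    return ScopedAnswer(
        text=f"Answer about {topic}, drawn from your documents.",
        citations=[f"{topic}_handbook.pdf#section-2"],
    )

print(answer("How do I get reimbursed for travel expenses?"))
```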

02

Humans in the Loop

HIO stands for Humans in the Loop.  That means when a query becomes too complicated or high-risk, we loop in a human, either with a targeted notification or by directing the user to seek an answer through your existing pipelines.
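A minimal sketch of one way an escalation decision can be made is shown below. The risk keywords, confidence threshold, and helper functions are assumptions for illustration, not HIO's actual escalation logic.

```python
# Illustrative sketch only: deciding when a query should go to a person.
HIGH_RISK_KEYWORDS = {"legal", "termination", "medical", "harassment"}  # placeholder list
CONFIDENCE_FLOOR = 0.7  # placeholder threshold

def notify_reviewer(query: str) -> None:
    # Stub: a real system might open a ticket or post to a review channel.
    print(f"[escalation] human review requested for: {query!r}")

def run_model(query: str) -> str:
    # Stub standing in for the normal automated response path.
    return f"Automated answer to: {query}"

def should_escalate(query: str, model_confidence: float) -> bool:
    """Escalate when the topic is sensitive or the model is unsure."""
    risky = any(word in query.lower() for word in HIGH_RISK_KEYWORDS)
    return risky or model_confidence < CONFIDENCE_FLOOR

def handle(query: str, model_confidence: float) -> str:
    if should_escalate(query, model_confidence):
        notify_reviewer(query)
        return "A member of your team has been notified and will follow up."
    return run_model(query)

print(handle("Can I be terminated while on medical leave?", 0.9))
```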

03

Custom Escalation Pathways

HIO can escalate concerns to a variety of parties: tech questions to your IT team, policy questions to your operations team, finance questions to your accounting team, and more.  Have a third party that is always confusing your users? We can direct escalations to them as well, freeing up your staff to focus on their niche, all part of HIO's commitment to preserving unique expertise.
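The sketch below shows one simple way such routing can be expressed: a table mapping question categories to the party that owns them. The category names, team addresses, and third-party entry are placeholders, not a real configuration.

```python
# Illustrative sketch only: routing escalations by question category.
ESCALATION_ROUTES = {
    "technical": "it-helpdesk@example.com",
    "policy": "operations@example.com",
    "finance": "accounting@example.com",
    "payroll_vendor": "support@third-party-payroll.example.com",  # external party
}

def route_escalation(category: str) -> str:
    """Return the contact for a category, with a safe default owner."""
    return ESCALATION_ROUTES.get(category, "operations@example.com")

# Example: a payroll question goes straight to the vendor, not your staff.
print(route_escalation("payroll_vendor"))
```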
