Human Oversight for AI Agents
This solution accelerator provides a pattern for integrating human approval steps into autonomous AI agent workflows using Azure Logic Apps and a Python decorator. As AI systems become more powerful and autonomous, implementing human oversight mechanisms becomes critical for safety and compliance.
Table of Contents
- Why Human Oversight for AI Systems?
- How It Works
- Code Example
- Included Demos
- Deploying the Solution
- Approval Workflow Experience
- Customizing the Solution
- Reporting
- Security Considerations
- Contributing
Why Human Oversight for AI Systems?
Autonomous AI agent systems can perform complex tasks with minimal supervision, but certain critical actions should require human approval before execution, such as:
- Deleting or modifying important resources
- Taking actions that impact user data or privacy
- Financial transactions or high-risk operations
- Actions with significant business impact or security implications
This solution provides a flexible, auditable human approval workflow that can be easily integrated into existing AI agent code with minimal changes. The design is orchestrator-agnostic, allowing it to work with virtually any AI agent framework or orchestration system.
How It Works
- Your AI agent uses a tool or function that has been annotated with the `@approval_gate` decorator
- The decorator intercepts the action and sends the details to an Azure Logic App
- The Logic App emails designated approvers with action details and approve/reject buttons
- The Python code waits for a response (with configurable timeout)
- If approved, the original tool/function executes; if rejected or timed out, a default value is returned
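Conceptually, the decorator pattern above can be sketched as follows. This is a minimal illustration, not the shipped implementation: the real `request_approval` step posts the action details to the Logic App trigger and blocks until a decision (or timeout) arrives, and is stubbed out here.

```python
import functools

def request_approval(agent_name, action_description, call_details):
    # Stub: the real implementation POSTs these details to the Logic App
    # trigger URL and waits for the approver's decision or a timeout.
    return True

def approval_gate(agent_name, action_description, approver_emails,
                  refusal_return_value=None):
    """Wrap a function so it only runs after human approval."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            details = {"args": args, "kwargs": kwargs}
            if request_approval(agent_name, action_description, details):
                return func(*args, **kwargs)  # approved: run the real action
            return refusal_return_value       # rejected or timed out
        return wrapper
    return decorator
```

Because the gate is an ordinary decorator, it composes with any orchestrator that calls plain Python functions as tools.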
Code Example
Here's a real-world example from the included sample application:
```python
import json

from human_oversight import approval_gate

# Regular function - no approval required
def list_users(location_filter: str = None):
    """Lists users, optionally filtering by location (domain part of email)."""
    print(f"Executing list_users(location_filter='{location_filter}')...")
    if not location_filter:
        return json.dumps(list(MOCK_USERS.values()))
    else:
        filtered_users = [
            user for user in MOCK_USERS.values()
            if user["email"].endswith(f"@{location_filter}")
        ]
        return json.dumps(filtered_users)

# Critical function with approval gate
@approval_gate(
    agent_name=AGENT_NAME,
    action_description="Delete User Account",
    approver_emails=APPROVERS,
    refusal_return_value="DENIED: User deletion was not approved.",
)
def delete_user(user_id: str):
    """Deletes a user account. Requires human approval via the Approval Gate."""
    print(f"Executing delete_user(user_id='{user_id}')...")
    if user_id in MOCK_USERS:
        deleted_user = MOCK_USERS.pop(user_id)
        print(f"Successfully deleted user: {deleted_user['name']} (ID: {user_id})")
        return json.dumps({
            "status": "success",
            "message": f"User {user_id} deleted.",
            "deleted_user": deleted_user,
        })
    else:
        print(f"User ID '{user_id}' not found.")
        return json.dumps({
            "status": "error",
            "message": f"User {user_id} not found.",
        })
```
Included Demos
The project includes two demo applications:
1. OpenAI Client Demo (app/openai_client_demo.py)
This demo shows an integration with the Azure OpenAI client. It demonstrates:
- Direct tool usage with OpenAI function calling
- Human oversight for critical operations (user deletion)
- Simple prompt-based interaction
```shell
# Run the OpenAI Client demo
cd app
python openai_client_demo.py
```
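Under the hood, a demo like this exposes its functions to the model via OpenAI's function-calling tool schema. A sketch of what a `delete_user` tool definition might look like (the description and parameter details here are illustrative assumptions, not copied from the sample):

```python
# Hypothetical tool definition for delete_user in the OpenAI
# function-calling format. The model sees only this schema; the
# approval gate fires when the local Python function actually runs.
delete_user_tool = {
    "type": "function",
    "function": {
        "name": "delete_user",
        "description": "Deletes a user account. Requires human approval.",
        "parameters": {
            "type": "object",
            "properties": {
                "user_id": {
                    "type": "string",
                    "description": "ID of the user to delete",
                },
            },
            "required": ["user_id"],
        },
    },
}
```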
2. Semantic Kernel Multi-Agent Demo (app/sk_demo.py)
This more advanced demo showcases integration with Microsoft's Semantic Kernel framework in a multi-agent system:
- A collaborative system with three specialized agents (Researcher, Critic, Publisher)
- GitHub code search capabilities
- Human oversight for the publishing operation
- Complex agent-to-agent interactions
```shell
# Run the Semantic Kernel demo
cd app
python sk_demo.py
```
Deploying the Solution
Step 1: Prerequisites
- Azure Subscription
- Azure CLI installed and logged in
- Office 365 account with permissions to send emails
Step 2: Clone the Repository and Set Up Environment
```shell
git clone https://github.com/microsoft/agents-humanoversight.git
cd agents-humanoversight
```

Step 3: Deploy Azure Resources using Bicep

```shell
# Login to Azure
az login

# Set your subscription
az account set --subscription "<Your-Subscription-ID>"

# Create resource group
az group create --name "rg-human-oversight" --location "eastus"

# Deploy resources using Bicep
cd deployment
az deployment group create \
  --resource-group "rg-human-oversight" \
  --template-file main.bicep
```
The deployment will output several values, including:
- `logicAppUrl` - Required for the Python application
- `storageAccountName` - For storing approval logs
- `approvalsTableName` - Table name for approval logs
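If you need these outputs again later (for example, to populate the `.env` file), you can query the deployment with the Azure CLI. The deployment name `main` below is an assumption; by default it matches the template file name you deployed with.

```shell
# Fetch the outputs of an earlier deployment
# (deployment name "main" assumed; it defaults to the template file name)
az deployment group show \
  --resource-group "rg-human-oversight" \
  --name "main" \
  --query properties.outputs
```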
Step 4: Authorize Office 365 Connection
This critical step requires manual authorization in the Azure Portal:
- Go to the Azure Portal
- Navigate to your resource group ("rg-human-oversight")
- Find and click on the API Connection resource named "office365"
- In the left menu, click on "Edit API connection"
- Click the "Authorize" button
- Sign in with your Office 365 account when prompted
- After successful authorization, the connection status should show "Connected"
- Click "Save" to save the connection
Step 5: Configure and Run the Python Application
1. Navigate to the application directory:

   ```shell
   cd app
   ```

2. Create a `.env` file with the required configuration:

   ```
   HO_LOGIC_APP_URL=<logicAppUrl-from-deployment-output>
   APPROVER_EMAILS=approver1@example.com,approver2@example.com

   # Optional for OpenAI integration
   AZURE_OPENAI_ENDPOINT=<your-endpoint>
   AZURE_OPENAI_API_KEY=<your-api-key>
   AZURE_OPENAI_DEPLOYMENT_NAME=<your-deployment>
   ```

3. Install dependencies:

   ```shell
   pip install -r requirements.txt
   ```

4. Run the sample application (see Included Demos above for the run commands)
Approval Workflow Experience
When a protected function is called:
1. The approver(s) receive an email with:
   - Agent name and action description
   - Parameters being passed to the function
   - Approve and Reject buttons
2. The Python application waits for a response for up to 2 minutes (configurable)
3. Based on the response:
   - If approved: the function executes as normal
   - If rejected: the function is not executed, and the configured refusal value is returned
   - If timed out: the function is not executed, and the configured refusal value is returned
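The wait-and-decide step can be sketched as a simple polling loop. This illustrates the fail-closed behavior (a timeout is treated like a rejection, so the protected function never runs by default); the actual implementation and its status values may differ.

```python
import time

def wait_for_decision(poll_status, timeout_seconds=120, poll_interval=5):
    """Poll an approval-status callable until it returns 'Approved' or
    'Rejected', or until the timeout elapses. Timing out returns 'Timeout',
    which callers treat the same as a rejection (fail closed)."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = poll_status()  # e.g. reads the approvals table or callback
        if status in ("Approved", "Rejected"):
            return status
        time.sleep(poll_interval)
    return "Timeout"
```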
Customizing the Solution
Customizing the Email Template
To modify the email format, edit the Logic App definition in `deployment/logicapp.bicep`. Look for the `Send_approval_email` action and update the Subject and Body fields.
Adding Additional Approvers
You can specify multiple approvers as a comma-separated list in the `APPROVER_EMAILS` environment variable, or directly in your code:
```python
@approval_gate(
    agent_name="CriticalAgent",
    action_description="Dangerous Action",
    approver_emails=["primary@example.com", "backup@example.com", "security@example.com"],
    refusal_return_value={"status": "denied"},
)
def dangerous_action():
    ...
```
Reporting
A Power BI dashboard is included to visualize approval data and monitor agent activity.
You can open `docs/approvaldashboard.pbix` in Power BI Desktop.
Note: You must update the data source connection details using Transform Data > Advanced Editor to match your storage account and table configuration.
Security Considerations
- The Logic App URL should be treated as a secret
- Use Key Vault in production to store sensitive configuration
- Consider implementing IP restrictions on the Logic App trigger
- Add Azure AD authentication for additional security
- Monitor and audit approval logs regularly
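For example, rather than hard-coding the Logic App URL, load it from the environment (populated from Key Vault or a `.env` file) and fail fast when it is missing. A minimal sketch, assuming the `HO_LOGIC_APP_URL` variable name used earlier:

```python
import os

def load_logic_app_url():
    """Read the Logic App URL from the environment instead of source code.
    Raises immediately if it is absent so a misconfigured deployment cannot
    silently skip the approval step."""
    url = os.environ.get("HO_LOGIC_APP_URL")
    if not url:
        raise RuntimeError(
            "HO_LOGIC_APP_URL is not set; store it in Key Vault or a .env "
            "file, never in source control."
        )
    return url
```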
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.