SHIPPED
Cisco AI Assistant
Helping network administrators trust AI systems to manage complex security workflows in large-scale enterprises.


OVERVIEW
Cisco's AI Assistant enables enterprise network administrators to create and deploy security policies in a fraction of the time, reducing manual effort and the risk of costly human errors.
Creating error-free security policies is critical for Cisco’s enterprise users—universities, hospitals, financial institutions, government agencies, and pharmaceutical companies—all aiming to protect sensitive data and assets. Given their complex security challenges, they need a trustworthy AI assistant to aid in policy creation and deployment.
Timeline
3 months
Team members
1 UX Designer (self), 2 Developers, 1 Data Analyst
My contribution
Led UX research—both primary and secondary—to develop a design framework that enhances user trust in Cisco’s AI Assistant. Crafted the complete UI and prototype, owning the full design process from concept to delivery.
THE PROBLEM
“AI is good but I wouldn't trust it to do the auto-deployment of a rule. There’s too much at stake.” - Nik, a network administrator using Cisco’s firewall management software.
AI allows Cisco to take routine, mundane tasks off users' plates so they can focus on their security goals. However, a successful AI experience requires that users trust the system and that the system fulfills the security outcomes of their organization.
TARGET USERS
Network administrators are responsible for creating, establishing, and enforcing a company's network security rules and protocols.
I studied the roles, responsibilities, and challenges network administrators face when creating and establishing network security rules, which helped me build an accurate understanding of the target users.
Responsibilities
Network admins are responsible for policy management, which includes creating, modifying, and overseeing security policies to regulate access to network resources.
Challenges
A key challenge is their hesitation to trust AI for security tasks due to concerns about transparency, reliability, and error-free execution.
KEY LEARNINGS
Network administrators need AI systems to be transparent and reliable before they can trust them for critical security tasks—lack of explainability is a major barrier to adoption.
SECONDARY RESEARCH
Trust in AI is hard to build, as it means different things to different people. I turned to secondary research to understand how trust works and how design can make AI feel more reliable.
A keyword search surfaced a key insight: people are more likely to trust something when they understand its background. Applied to AI, this means using Explainable AI (XAI) to keep users informed. My research uncovered three key stages for building user trust in AI systems:
PRESENTING XAI IN THE RIGHT FORMAT (3/3)
“Consider how you will present the XAI output to your intended audience. For eg. use a dashboard, report, or dialogue system to convey the output.”
("How to use Explainable AI in your workflow," 2023)
KEY LEARNINGS
Building user trust in AI requires clear explanations, choosing the right method for the target user, and presenting it in a format they can easily understand.
COMPETITIVE ANALYSIS
After exploring Explainable AI and how to apply it, I researched current AI-based cybersecurity tools to see how they work and earn users’ trust.

KEY LEARNINGS
The major AI security tools detect threats well but seldom explain their decisions clearly, leaving a trust gap for administrators.
THE FRAMEWORK
I dug deeper into design considerations and practices that could meaningfully improve network administrators' trust in Cisco's AI Assistant.
This further research into design best practices for building user trust in AI revealed a few more insights:
User Personalization
Personalizing the AI assistant makes users feel heard and aligns its actions with both individual and organizational needs.
Human Embodiment
Giving AI a natural, human-like voice significantly boosts user trust. (Lexow, 2021)
Human-in-the-loop (HITL)
Keeping network admins in the loop ensures better control over policy creation and execution. (Lexow, 2021)
Detailed explanations
Detailed explanations of AI-generated policies give users the backstory behind the AI's actions, boosting trust.

FINAL SOLUTION
An enhanced version of Cisco’s AI Assistant that explains every action, keeps humans in the loop for approvals, and includes key features designed to build user trust.
The high-fidelity designs focused on building user trust through tested mechanisms such as Explainable AI (XAI), human embodiment, human-in-the-loop (HITL), and user personalization.
USER PERSONALIZATION
Network administrators can customize the AI assistant to match their preferences. Since some users want more control than others, offering the full range of customization options caters to every kind of user. A sketch of what such a preference model could look like follows.
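Under the hood, this personalization could be captured in a simple preference model. The TypeScript shape below is purely hypothetical — none of these field names come from Cisco's product — but it illustrates the idea that conservative defaults preserve maximum human oversight until the admin opts out.

```ts
// Hypothetical preference model for the AI assistant.
// All field names are illustrative assumptions, not Cisco's actual settings.
interface AssistantPreferences {
  voice: "neutral" | "formal" | "conversational"; // human-embodiment tone
  explanationDepth: "summary" | "detailed";       // how much XAI detail to surface
  autoDeploy: boolean;                            // false keeps a human approval step
  notifyOn: Array<"rule-created" | "rule-modified" | "deployment">;
}

// Conservative defaults: maximum oversight until the admin chooses otherwise.
const defaultPreferences: AssistantPreferences = {
  voice: "neutral",
  explanationDepth: "detailed",
  autoDeploy: false,
  notifyOn: ["rule-created", "rule-modified", "deployment"],
};
```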

PROMPTING THE AI ASSISTANT
This screen only establishes context for the ones that follow. Here, network administrators can prompt the AI to create either an Internet Access Rule or a Private Access Rule, the two types of security policies that can be created.

STEP-WISE TRANSPARENCY & HUMAN-IN-THE-LOOP
Once the prompt to create a security policy is submitted, Cisco's AI assistant provides a transparent view of its actions and asks for user input wherever necessary, making sure the security policy is generated for the correct entity.
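Conceptually, this flow is a series of checkpoints: the assistant announces each step and waits for approval before acting. Here is a minimal sketch, assuming a generic confirm() callback supplied by the UI layer (all names are illustrative, not Cisco's implementation):

```ts
// Each step declares what it will do before it does it (step-wise transparency).
type Step = { description: string; run: () => Promise<void> };

async function runWithCheckpoints(
  steps: Step[],
  confirm: (description: string) => Promise<boolean>, // supplied by the UI
): Promise<void> {
  for (const step of steps) {
    // Surface the intended action and wait for human approval.
    const approved = await confirm(step.description);
    if (!approved) {
      console.log(`Stopped before: ${step.description}`);
      return; // abort the whole flow: nothing runs without explicit approval
    }
    await step.run();
  }
}
```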

OLD VERSUS NEW SECURITY RULE
To make things even clearer, the Cisco AI assistant compares the existing security rule with the new one, giving users a chance to verify the change and spot any duplicate rules in the process.
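One way to drive such a comparison is a field-by-field diff of the two rules; as a bonus, an empty diff flags a likely duplicate. The sketch below assumes a simplified SecurityRule shape chosen for illustration only:

```ts
// Simplified rule shape for illustration; real firewall rules carry many more fields.
interface SecurityRule {
  name: string;
  source: string;
  destination: string;
  action: "allow" | "deny";
}

// Returns a human-readable list of changed fields between the old and new rule.
function diffRules(oldRule: SecurityRule, newRule: SecurityRule): string[] {
  const changes: string[] = [];
  for (const key of Object.keys(oldRule) as (keyof SecurityRule)[]) {
    if (oldRule[key] !== newRule[key]) {
      changes.push(`${key}: "${oldRule[key]}" -> "${newRule[key]}"`);
    }
  }
  return changes; // an empty diff suggests the new rule duplicates the old one
}
```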

TRANSPARENCY BEFORE DEPLOYMENT
Once the AI assistant has identified all the parameters and values of the new security rule, it presents the complete information to the user for review and acknowledgment.

DETAILED EXPLANATIONS USING XAI
Users can also view the full explanation behind a specific security policy, giving them the backstory of its parameters and values.
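To render such explanations, each parameter could carry its chosen value, a plain-language rationale, and the source it was derived from. The shape below is entirely hypothetical, a sketch of one way the UI could be fed per-parameter backstory:

```ts
// Hypothetical per-parameter explanation record for the XAI view.
interface ParameterExplanation {
  parameter: string;  // e.g. "action"
  value: string;      // the value the AI chose
  rationale: string;  // plain-language reason, written for the admin
  source: "prompt" | "existing-policy" | "default"; // where the value came from
}

// Example of what one rendered entry might contain.
const example: ParameterExplanation = {
  parameter: "action",
  value: "allow",
  rationale: "The prompt asked to grant the Finance group access to the payroll app.",
  source: "prompt",
};
```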

FAIL-SAFE DEPLOYMENT
To prevent mistakes before a security policy is deployed, Cisco's AI assistant prompts the network administrator to review and approve it with their credentials, reinforcing trust that the AI assistant acts under expert supervision.
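In code terms, this is a gate in front of the deployment call: nothing ships unless the admin re-authenticates and explicitly approves. A minimal sketch, with verifyCredentials() and deploy() assumed to be provided elsewhere — this is not Cisco's API:

```ts
// Fail-safe gate: deployment runs only after an authenticated approval.
async function deployWithApproval(
  ruleName: string,
  verifyCredentials: () => Promise<boolean>, // assumed: re-authenticates the admin
  deploy: (rule: string) => Promise<void>,   // assumed: the actual deployment call
): Promise<boolean> {
  const verified = await verifyCredentials();
  if (!verified) {
    console.error(`Deployment of "${ruleName}" blocked: credential check failed.`);
    return false;
  }
  await deploy(ruleName); // reached only after explicit, authenticated approval
  return true;
}
```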

LEARNINGS
This was a challenging project in both context and construct: user trust is abstract and subjective by nature, and there was a lot to decipher and understand.
Trust must be designed, not assumed
I learned that trust in AI doesn’t come naturally; it needs to be intentionally built through thoughtful UX decisions like clear explanations, consistent feedback, and user involvement.
Users want control, not just automation
Keeping users in the loop through approval checkpoints gave me insight into how crucial it is to balance AI autonomy with human oversight.
Clarity beats complexity
Even in an enterprise setting, simplifying language and visuals goes a long way in building confidence in advanced systems like AI.
Explainable AI (XAI) must be human-first
It’s not enough for AI to be technically sound; users need simple, contextual explanations that fit their mental model and role.