DOONKLY AI Governance

Implementation of the NIST AI Risk Management Framework

Our Mission

DOONKLY is an advanced AI assistant for Chrome that transforms your browser into an intelligent workspace. As indicated in the NIST AI Risk Management Framework (AI RMF 1.0), responsible management of artificial intelligence risks is fundamental to building reliable and trustworthy systems.

We adopt a structured approach based on the four core functions of the NIST AI RMF: GOVERN, MAP, MEASURE, and MANAGE, ensuring transparency, accountability, and safety in AI usage.

The Four NIST AI RMF Functions in DOONKLY

GOVERN - Governance and Policies

The GOVERN function establishes organizational culture and processes for AI risk management, ensuring alignment with ethical principles and corporate values.

GOVERN 1: Transparent Policies and Processes

  • Legal Compliance: We understand and respect GDPR, LGPD, and other privacy and data protection regulations
  • Trustworthy Characteristics: We integrate validity, safety, transparency, and privacy at every stage of the system lifecycle
  • Risk Management: Determination of risk management activity level based on organizational risk tolerance
  • Continuous Monitoring: Periodic review of risk management processes with clearly defined roles and responsibilities
  • AI System Inventory: Mechanisms to inventory AI systems according to risk priorities

GOVERN 2: Accountability Structures

  • Defined Roles: Clear documentation of roles and responsibilities for mapping, measuring, and managing AI risks
  • Training: Continuous staff training on AI risk management
  • Accountable Leadership: Executive leadership takes responsibility for AI system risk decisions

GOVERN 3: Diversity and Inclusion

  • Diverse Teams: Decision-making informed by teams with diversity in backgrounds, disciplines, and experiences
  • Human-AI Configurations: Policies to define roles and responsibilities in human-AI interactions

GOVERN 5: Engagement with AI Actors

  • External Feedback: Collection and integration of feedback from users and external stakeholders
  • Societal Impacts: Consideration of individual and social impacts of AI systems

GOVERN 6: Third-Party Management

  • Third-Party Risks: Policies to manage risks associated with third-party software, models, and data (e.g., our OpenAI dependencies)
  • Intellectual Property: Respect for intellectual property rights and licenses
  • Incident Management: Contingency processes to manage failures in high-risk third-party AI systems
  • Zero Data Retention: OpenAI applies zero data retention policy - API data is not stored or used for model training

MAP - Context and Risk Mapping

The MAP function establishes the context to frame risks related to an AI system, identifying purposes, benefits, costs, and potential impacts.

MAP 1: Context Established

  • Intended Purposes: Documentation of intended uses and potential benefits of DOONKLY (screenshot analysis, document processing, voice recording, content generation)
  • Impacts: Assessment of positive and negative impacts on individuals, organizations, and society
  • Usage Context: Understanding user expectations, legal norms, and deployment contexts
  • Organizational Mission: Alignment with productivity and intelligent assistance objectives
  • Risk Tolerance: Definition and documentation of organizational risk tolerances

MAP 2: AI System Categorization

  • Specific Tasks: Definition of tasks (classification, generation, multimodal analysis)
  • System Limitations: Documentation of knowledge limitations and necessary human oversight
  • Scientific Integrity: TEVV considerations (Test, Evaluation, Verification, Validation)

MAP 3: Capabilities and Benefits

  • Benefits: Increased productivity, multimodal analysis, intelligent document processing
  • Costs: Assessment of monetary and non-monetary costs, including system errors
  • Scope: Clear definition of application scope (Chrome extension, browser assistant)
  • Operator Competence: Defined processes to ensure operators and users have the proficiency needed to use the system

MAP 4: Third-Party Component Risks

  • Third-Party Technologies: Risk mapping for OpenAI models (GPT, DALL-E)
  • Internal Controls: Identification of risk controls for third-party components
  • Data Protection: OpenAI zero data retention ensures user data is not stored or used for training

MAP 5: Stakeholder Impacts

  • Impact Assessment: Probability and magnitude of beneficial and harmful impacts
  • Engagement: Practices for regular engagement with users and feedback on impacts

MEASURE - Measurement and Evaluation

The MEASURE function employs quantitative and qualitative tools to analyze, assess, and monitor AI risks and trustworthiness characteristics.

MEASURE 1: Metrics and Methods

  • Metric Selection: Identification of metrics to measure significant AI risks
  • Regular Assessment: Periodic assessment of metric appropriateness and control effectiveness
  • Independent Experts: Involvement of independent assessors and domain experts

MEASURE 2: Trustworthy Characteristics Assessment

  • Test Sets: Documentation of test sets, metrics, and TEVV tools
  • Validity and Reliability: Demonstration that the system is valid, reliable, and generalizable
  • Safety: Safety risk assessment, with residual risks within tolerance
  • Security and Resilience: Assessment and documentation of cybersecurity and resilience
  • Transparency: Examination of risks related to transparency and accountability
  • Explainability: Explanation and validation of the AI model, with outputs interpreted in context
  • Privacy: Examination and documentation of privacy risks
  • Fairness: Fairness and bias assessment, with documented results
  • Sustainability: Assessment of environmental impact of AI model training and management

MEASURE 3: Risk Tracking Over Time

  • Continuous Monitoring: Identification and tracking of existing, unexpected, and emerging risks
  • User Feedback: Feedback processes for end users to report issues
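The tracking loop above can be sketched in code. The following is a hypothetical, illustrative example of how user-reported issues might be collected and a category flagged as an emerging risk once it recurs; the class name, categories, and threshold are assumptions for illustration, not DOONKLY's actual implementation.

```javascript
// Illustrative sketch of MEASURE 3 risk tracking: collect user-reported
// issues and flag a category as an emerging risk once it recurs often
// enough. Names and thresholds are hypothetical.
class RiskTracker {
  constructor({ threshold = 3 } = {}) {
    this.threshold = threshold; // reports needed to flag a category
    this.reports = new Map();   // category -> report count
  }

  // Called whenever an end user reports an issue (MEASURE 3 feedback process)
  report(category) {
    this.reports.set(category, (this.reports.get(category) || 0) + 1);
  }

  // Categories whose report count has reached the flagging threshold
  emergingRisks() {
    return [...this.reports.entries()]
      .filter(([, count]) => count >= this.threshold)
      .map(([category]) => category);
  }
}
```

In practice the flagged categories would feed back into the MANAGE function for prioritization and response.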

MANAGE - Management and Mitigation

The MANAGE function allocates resources for mapped and measured risks, implementing response plans, recovery, and incident communication.

MANAGE 1: Prioritization and Response

  • Go/No-Go Decision: Determination of whether the AI system achieves its intended purposes
  • Prioritization: Treatment of documented risks based on impact, probability, and resources
  • Response Plans: Development of responses for high-priority risks (mitigation, transfer, avoidance, acceptance)
  • Residual Risks: Documentation of negative residual risks for downstream buyers and end users

MANAGE 2: Benefit Maximization Strategies

  • Resource Allocation: Resources to manage AI risks, considering non-AI alternatives
  • Sustainable Value: Mechanisms to sustain the value of deployed AI systems
  • Unknown Risk Response: Procedures to respond to previously unknown risks
  • Deactivation: Mechanisms to deactivate AI systems with inconsistent performance
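A deactivation mechanism of this kind can be sketched as a rolling error-rate monitor that signals when a feature should be switched off. The class name, window size, and error threshold below are illustrative assumptions, not DOONKLY's actual implementation.

```javascript
// Illustrative sketch: monitor recent call outcomes and signal deactivation
// when performance becomes inconsistent. All names and thresholds are
// hypothetical.
class PerformanceMonitor {
  constructor({ windowSize = 50, maxErrorRate = 0.2 } = {}) {
    this.windowSize = windowSize;     // number of recent calls considered
    this.maxErrorRate = maxErrorRate; // tolerated fraction of failures
    this.outcomes = [];               // true = success, false = failure
  }

  record(success) {
    this.outcomes.push(success);
    if (this.outcomes.length > this.windowSize) this.outcomes.shift();
  }

  errorRate() {
    if (this.outcomes.length === 0) return 0;
    const failures = this.outcomes.filter((ok) => !ok).length;
    return failures / this.outcomes.length;
  }

  // Signal deactivation only once there is enough data to judge
  shouldDeactivate() {
    return this.outcomes.length >= 10 && this.errorRate() > this.maxErrorRate;
  }
}
```

Requiring a minimum sample count before acting avoids deactivating a healthy system on one or two early failures.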

MANAGE 3: Third-Party Management

  • Continuous Monitoring: Regular monitoring of risks and benefits from third-party resources
  • Pre-trained Models: Monitoring of OpenAI pre-trained models (GPT, DALL-E) as part of regular maintenance
  • Data Retention Policy: Verification that OpenAI maintains zero data retention for API requests

MANAGE 4: Communication Plans

  • Post-Deployment Monitoring: Plans with mechanisms for user input, appeals, overrides, incident response
  • Continuous Improvement: Measurable activities for continuous improvements with stakeholder engagement
  • Incident Communication: Communication of incidents and errors to relevant AI actors, including impacted communities

Trustworthiness Characteristics in DOONKLY

According to the NIST AI RMF, trustworthy AI systems must balance the following characteristics. Here's how DOONKLY implements them:

Valid & Reliable

Use of industry-leading AI models (GPT, Claude, Gemini) with validated performance and demonstrated robustness

Safe

Responsible design, clear usage information, risk documentation, and human oversight when necessary

Secure & Resilient

Zero server-side data storage, endpoint protection, failure resilience with fallback mechanisms
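The fallback behavior described here can be sketched as trying a primary model caller and falling back to alternates on failure. The function name and stand-in callers below are hypothetical, not DOONKLY's real API clients.

```javascript
// Illustrative sketch of failure resilience via fallback: try each model
// backend in order and return the first successful answer. Callers are
// stand-ins for real API clients.
async function askWithFallback(prompt, callers) {
  const errors = [];
  for (const call of callers) {
    try {
      return await call(prompt); // first successful answer wins
    } catch (err) {
      errors.push(err); // remember the failure, try the next backend
    }
  }
  throw new Error(
    `All ${callers.length} model backends failed: ` +
    errors.map((e) => e.message).join("; ")
  );
}
```

Collecting every error before giving up also supports the incident documentation called for under MANAGE 4.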

Accountable & Transparent

Clear documentation of models used, data provenance, transparent decision-making processes

Explainable & Interpretable

Explanations of AI functionalities, contextual output interpretation, limitation documentation

Privacy-Enhanced

No server-side storage, local processing when possible, GDPR/LGPD compliance, data minimization

Fair - Bias Managed

Monitoring of bias in third-party models, diversity in testing, documentation of limitations and potential biases

Privacy & Data Protection - Our Commitment

Zero Server-Side Storage: As stated on the DOONKLY website, we do not perform server-side data storage. All processed data remains in the user's browser.

Regulatory Compliance: We respect GDPR (Art. 22 - automated decisions), LGPD (Art. 20 - automated decision reviews), and other international privacy regulations.

Data Minimization: We process only the data strictly necessary to provide the service requested by the user.
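One common way to enforce data minimization is an allowlist applied to every outgoing request, so that only required fields ever leave the browser. The field names below are illustrative assumptions, not DOONKLY's actual request schema.

```javascript
// Illustrative sketch of data minimization: keep only the fields an API
// request actually needs. ALLOWED_FIELDS is a hypothetical allowlist.
const ALLOWED_FIELDS = ["prompt", "model", "temperature"];

function minimizeRequest(payload, allowed = ALLOWED_FIELDS) {
  const minimal = {};
  for (const key of allowed) {
    if (key in payload) minimal[key] = payload[key];
  }
  return minimal; // anything not on the allowlist is never transmitted
}
```

An allowlist is preferable to a blocklist here: fields added to the payload later are excluded by default rather than leaked by default.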

Third-Party Transparency: When we use OpenAI APIs, we inform users and ensure data is transmitted securely. OpenAI applies a zero data retention policy - API requests are not stored or used for model training.

Comparison: Traditional Approach vs DOONKLY Approach

Aspect | Traditional Approach | DOONKLY Approach (NIST AI RMF)
--- | --- | ---
Governance | Ad-hoc policies, unclear responsibilities | Structured GOVERN framework with defined roles, continuous training, periodic review
Risk Assessment | Sporadic assessment, lack of context | Complete MAP function with context mapping, stakeholders, societal impacts
Metrics | Generic metrics, irregular testing | MEASURE function with trustworthiness metrics (safety, security, fairness, privacy)
Incident Management | Reactive response, poor documentation | MANAGE function with proactive plans, transparent communication, continuous improvement
Transparency | Black-box models, limited explanations | Complete documentation, explainability, contextual interpretability
Privacy | Centralized storage, limited user control | Zero server-side storage, local processing, GDPR/LGPD compliance
Third-Party | Unmanaged dependencies, unknown risks | GOVERN 6 and MANAGE 3: continuous monitoring, contingency plans, DPAs and SCCs