What are Detection metrics?

Crafting the perfect Detection metrics can feel overwhelming, particularly when you're juggling daily responsibilities. That's why we've put together a collection of examples to spark your inspiration.
Copy these examples into your preferred app, or use Tability to keep yourself accountable.
Find Detection metrics with AI

While we have some examples available, it's likely that you'll have specific scenarios that aren't covered here. You can use our free AI metrics generator below to generate your own metrics.
Examples of Detection metrics and KPIs

1. Incident Detection Time

The time taken from the moment a threat first occurs to the moment it is detected and an incident response is initiated (see the calculation sketch after these examples).

What good looks like for this metric: Typically less than 15 minutes

Ideas to improve this metric:
- Implement automated alerting systems
- Conduct regular threat hunting exercises
- Enhance staff training on threat identification
- Integrate with advanced threat intelligence platforms
- Utilise machine learning for anomaly detection

2. Containment Time

The duration between detection and containment of a threat, to minimise its spread and impact.

What good looks like for this metric: Ideally under 30 minutes

Ideas to improve this metric:
- Automate endpoint isolation procedures
- Improve network segmentation
- Establish predefined incident response playbooks
- Regularly test response strategies
- Foster collaboration between IT and security teams

3. False Positive Rate

The percentage of alerts that are incorrectly identified as threats.

What good looks like for this metric: Should be below 5%

Ideas to improve this metric:
- Refine rule sets and detection algorithms
- Incorporate feedback loops to learn from past alerts
- Leverage threat intelligence feeds
- Enhance contextual information in alerts
- Invest in more precise detection technologies

4. Number of Lateral Movement Attempts

The number of attempts by threats to move laterally within a network after initial access.

What good looks like for this metric: Ideally zero attempts

Ideas to improve this metric:
- Deploy micro-segmentation techniques
- Monitor for unusual access patterns
- Strengthen user privilege controls
- Use lateral movement detection tools
- Conduct regular security audits and penetration testing

5. Incident Recovery Time

The time required to fully restore systems and operations after an incident.

What good looks like for this metric: Within 24 hours for minor incidents

Ideas to improve this metric:
- Maintain regular backups and restore procedures
- Invest in resilient infrastructure
- Document and streamline recovery processes
- Facilitate cross-department cooperation
- Regularly update and test recovery plans
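To make the timing metrics and false positive rate above concrete, here is a minimal sketch of how they could be computed from an incident log. The record structure and field names (occurred_at, detected_at, contained_at, is_false_positive) are hypothetical assumptions for illustration, not taken from any specific tool.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; in practice these would come from a SIEM export.
incidents = [
    {"occurred_at": datetime(2024, 5, 1, 9, 0),
     "detected_at": datetime(2024, 5, 1, 9, 12),
     "contained_at": datetime(2024, 5, 1, 9, 35),
     "is_false_positive": False},
    {"occurred_at": datetime(2024, 5, 2, 14, 0),
     "detected_at": datetime(2024, 5, 2, 14, 8),
     "contained_at": datetime(2024, 5, 2, 14, 20),
     "is_false_positive": True},
]

# Incident detection time: occurrence to detection, in minutes.
detection_minutes = mean(
    (i["detected_at"] - i["occurred_at"]).total_seconds() / 60 for i in incidents)

# Containment time: detection to containment, in minutes.
containment_minutes = mean(
    (i["contained_at"] - i["detected_at"]).total_seconds() / 60 for i in incidents)

# False positive rate: share of alerts that turned out not to be real threats.
false_positive_rate = 100 * sum(i["is_false_positive"] for i in incidents) / len(incidents)

print(f"Mean detection time: {detection_minutes:.1f} min")
print(f"Mean containment time: {containment_minutes:.1f} min")
print(f"False positive rate: {false_positive_rate:.1f}%")
```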
1. Accuracy

Proportion of overall correct predictions made by the system (see the worked sketch after these examples).

What good looks like for this metric: Typical values range from 85% to 92%

Ideas to improve this metric:
- Regularly update training data with new examples of cyberbullying
- Employ data augmentation techniques to enhance model robustness
- Refine algorithms to better differentiate between nuanced bullying and benign interactions
- Invest in powerful computational resources for training
- Enhance feature selection to include more relevant variables

2. Precision

Proportion of identified bullying incidents that were truly bullying (minimises false positives).

What good looks like for this metric: Typical values range from 80% to 89%

Ideas to improve this metric:
- Implement stricter thresholds for classifying messages as bullying
- Use ensemble methods to improve precision
- Incorporate more contextual clues from text data
- Regularly review and analyse false positive cases
- Enhance the algorithm's sensitivity to language nuances

3. Recall

Proportion of actual bullying cases that the system successfully detected (minimises false negatives).

What good looks like for this metric: Typical values range from 86% to 93%

Ideas to improve this metric:
- Increase dataset size with more diverse examples of bullying
- Utilise semi-supervised learning to leverage unlabelled data
- Adapt models to recognise emerging slang or code words used in bullying
- Incorporate real-time updates to improve detection speed
- Conduct regular system audits to identify and correct blind spots

4. F1-Score

Harmonic mean of precision and recall, providing a balanced measure of both.

What good looks like for this metric: Typical values range from 83% to 91%

Ideas to improve this metric:
- Improve either precision or recall without sacrificing the other
- Perform cross-validation to identify optimal model parameters
- Use advanced NLP techniques for better text understanding
- Gather regular user feedback to identify missed detection patterns
- Use continuous deployment to roll out improvements quickly

5. AUC-ROC

Measures the ability to distinguish between classes across various thresholds.

What good looks like for this metric: Typical values range from 0.89 to 0.95

Ideas to improve this metric:
- Optimise feature selection to improve class separation
- Apply deep learning methods for better pattern recognition
- Leverage domain expert input to refine classification criteria
- Regularly update models to adjust to new trends in digital communication
- Evaluate model performance using different cut-off points for better discrimination
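To show how these five scores relate, here is a minimal sketch using scikit-learn (an assumed choice; any metrics library would work). The arrays y_true, y_pred, and y_scores are hypothetical stand-ins for real labels and model outputs.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Hypothetical data: 1 = bullying, 0 = benign.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]                     # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                     # hard predictions from the model
y_scores = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]   # predicted probabilities

print(f"Accuracy : {accuracy_score(y_true, y_pred):.2f}")   # overall correct predictions
print(f"Precision: {precision_score(y_true, y_pred):.2f}")  # flagged cases that were truly bullying
print(f"Recall   : {recall_score(y_true, y_pred):.2f}")     # actual bullying cases that were caught
print(f"F1-score : {f1_score(y_true, y_pred):.2f}")         # harmonic mean of precision and recall
print(f"AUC-ROC  : {roc_auc_score(y_true, y_scores):.2f}")  # class separation across thresholds
```

Note that AUC-ROC is computed from the probability scores rather than the hard predictions, which is why it captures behaviour across all classification thresholds.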
1. Mean Time to Resolve (MTTR)

Average time taken to resolve major incidents, calculated from the time the incident is reported until it is fully resolved (see the sketch after these examples).

What good looks like for this metric: 2-4 hours

Ideas to improve this metric:
- Implement automated incident response tools
- Conduct regular training for incident response teams
- Refine incident categorisation and prioritisation processes
- Establish a dedicated major incident team
- Analyse past incidents to identify improvement areas

2. Major Incident Recurrence Rate

Percentage of major incidents that recur within a specific timeframe after resolution.

What good looks like for this metric: Below 5%

Ideas to improve this metric:
- Conduct thorough root cause analysis
- Implement permanent fixes rather than temporary solutions
- Regularly review and update the incident management process
- Enhance collaboration between incident and problem management teams
- Utilise knowledge management to share solutions and prevent recurrence

3. Incident Resolution Quality

Quality of incident resolution, measured through stakeholder feedback and post-incident reviews.

What good looks like for this metric: Above 90% positive feedback

Ideas to improve this metric:
- Develop a clear incident resolution checklist
- Provide additional training on customer service skills
- Standardise post-incident review processes
- Gather and act on stakeholder feedback
- Implement continuous improvement initiatives

4. Stakeholder Communication Effectiveness

Effectiveness of communication with stakeholders during major incidents, measured through feedback and surveys.

What good looks like for this metric: Above 80% satisfaction

Ideas to improve this metric:
- Establish a communication plan template
- Utilise multiple communication channels
- Train staff in effective communication techniques
- Regularly update stakeholders during incidents
- Review and refine communication strategies based on feedback

5. Incident Detection Time

Time taken to detect incidents, from the moment they occur to the moment they are identified.

What good looks like for this metric: Within 10 minutes

Ideas to improve this metric:
- Implement advanced monitoring and alerting systems
- Conduct regular audits of detection tools and processes
- Improve correlation of events and patterns
- Train staff to recognise potential incidents quickly
- Increase the frequency of system health checks
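As an illustration of the first two metrics, here is a minimal sketch deriving MTTR and the recurrence rate from ticket timestamps. The record structure (reported_at, resolved_at, recurred) is a hypothetical assumption; real fields would come from your ticketing system.

```python
from datetime import datetime
from statistics import mean

# Hypothetical major-incident records from a ticketing system.
incidents = [
    {"reported_at": datetime(2024, 6, 3, 10, 0),
     "resolved_at": datetime(2024, 6, 3, 13, 30),
     "recurred": False},
    {"reported_at": datetime(2024, 6, 10, 8, 15),
     "resolved_at": datetime(2024, 6, 10, 10, 45),
     "recurred": True},
]

# MTTR: mean time from report to full resolution, in hours.
mttr_hours = mean(
    (i["resolved_at"] - i["reported_at"]).total_seconds() / 3600 for i in incidents)

# Recurrence rate: share of incidents that came back within the chosen window.
recurrence_rate = 100 * sum(i["recurred"] for i in incidents) / len(incidents)

print(f"MTTR: {mttr_hours:.1f} h")
print(f"Recurrence rate: {recurrence_rate:.0f}%")
```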
1. Latency

Time taken to process a transaction or event, from input to final output.

What good looks like for this metric: 200-500 milliseconds

Ideas to improve this metric:
- Optimise network bandwidth
- Utilise more efficient consensus algorithms
- Reduce data complexity in transactions
- Incorporate edge computing techniques
- Enhance processing speeds of nodes

2. Throughput

Number of transactions processed within a given period.

What good looks like for this metric: 10-100 transactions per second

Ideas to improve this metric:
- Increase the number of nodes
- Upgrade node hardware
- Implement parallel processing techniques
- Optimise transaction validation methods
- Utilise sharding techniques

3. Security Breach Rate

Number of successful breach attempts over a given period.

What good looks like for this metric: 0-1 breach per year

Ideas to improve this metric:
- Regularly update encryption protocols
- Conduct routine security audits
- Implement multi-factor authentication
- Train staff on security awareness
- Utilise a robust incident response strategy

4. Scalability

Ability to maintain performance as network size or data volume increases.

What good looks like for this metric: Linear performance degradation with scale

Ideas to improve this metric:
- Adopt more scalable consensus algorithms
- Reduce data redundancy
- Utilise cloud resources for storage
- Implement load balancing techniques
- Employ distributed ledger technology

5. Data Integrity

Accuracy and consistency of data over its lifecycle (see the sketch after these examples).

What good looks like for this metric: 99.9% integrity rate

Ideas to improve this metric:
- Regularly verify data with hash functions
- Set permissions and roles for data access
- Utilise smart contracts for automatic validation
- Implement data replication strategies
- Conduct integrity checks at regular intervals
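As a small illustration of the first idea under Data Integrity, here is a minimal sketch of hash-based verification using Python's standard hashlib. The record contents and helper names are hypothetical, chosen only for the example.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of the data."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """Recompute the digest and compare it to the stored value."""
    return fingerprint(data) == expected_digest

# When a record is written, store its digest alongside it.
record = b'{"tx_id": 42, "amount": 100}'
stored_digest = fingerprint(record)

# Later, recompute the digest to detect tampering or corruption.
print(verify(record, stored_digest))                           # True: intact
print(verify(b'{"tx_id": 42, "amount": 999}', stored_digest))  # False: altered
```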
Tracking your Detection metrics

Having a plan is one thing, sticking to it is another. Don't fall into the set-and-forget trap: adopt a weekly check-in process to keep your strategy agile, otherwise it becomes nothing more than a reporting exercise.
A tool like Tability can also help you by combining AI and goal-setting to keep you on track.
More metrics recently published

We have more examples to help you below.
Planning resources

OKRs are a great way to translate strategies into measurable goals. Here is a list of resources to help you adopt the OKR framework: