Psybersecurity: Human Factors of Cyber Defence

Psybersecurity: Human Factors of Cyber Defence is a clarion call to action in the face of a stark reality: over 90% of cyber attacks exploit human vulnerabilities, as highlighted by the 2022 Global Risks Report from the World Economic Forum.

Bibliographic Details
Main Author: Guidetti, Oliver
Other Authors: Ahmed, Mohiuddin, Speelman, Craig
Format: eBook
Language: English
Published: Boca Raton: Taylor & Francis Group, 2024.
Edition: 1st ed.
See on Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009869128506719
Table of Contents:
  • Cover
  • Half Title
  • Title
  • Copyright
  • Dedication
  • Contents
  • Preface
  • About the editors
  • List of contributors
  • 1 Integrating human factors and systemic resilience: an interdisciplinary approach to cybersecurity in critical infrastructures and utilities
  • 1.1 Introduction: the convergence of disciplines in cybersecurity
  • 1.1.1 Statement of the problem
  • 1.1.2 Significance of this study
  • 1.1.3 Research objectives
  • 1.1.4 Chapter overview
  • 1.2 Cyber-physical security and critical infrastructure: the indivisible duo
  • 1.2.1 Definition of key terms
  • 1.2.2 Evolution and importance of cyber-physical systems
  • 1.2.3 Specific challenges in cyber-physical security
  • 1.3 Systemic vulnerabilities and cybersecurity threats
  • 1.3.1 Systemic vulnerabilities: an overview
  • 1.3.2 The interface between systemic vulnerabilities and cybersecurity
  • 1.3.3 Cyberthreats leveraging systemic weaknesses
  • 1.4 The human element in cybersecurity: the weakest link
  • 1.4.1 Human factors in cybersecurity: an overview
  • 1.4.2 Human errors: a breach in cyber defence
  • 1.4.3 The psychology behind social engineering
  • 1.4.4 Insider threats: a hidden menace
  • 1.5 Role of culture in cybersecurity: shaping the human factor
  • 1.5.1 The significance of cybersecurity culture
  • 1.5.2 Creating a cybersecurity culture in utilities
  • 1.5.3 Shift from information security to cybersecurity
  • 1.5.4 Impact of culture on cybersecurity performance
  • 1.6 Integrated cybersecurity frameworks and strategies: a holistic approach
  • 1.6.1 The need for integration in cybersecurity
  • 1.6.2 Principles of integrated cybersecurity frameworks
  • 1.6.3 Risk analyses and hazard management in cybersecurity
  • 1.6.4 Implementing an integrated cybersecurity strategy
  • 1.7 Systemic resilience in cybersecurity: bouncing back from attacks
  • 1.7.1 Systemic resilience: an overview
  • 1.7.2 Role of systemic resilience in cybersecurity
  • 1.7.3 Building systemic resilience in critical infrastructures
  • 1.8 The future of cybersecurity and AI's role: the next frontier
  • 1.8.1 Emerging trends in cybersecurity
  • 1.8.2 AI and cybersecurity: a new paradigm
  • 1.8.3 The future of AI in mitigating human-related risks
  • 1.9 Conclusions and recommendations: charting the path forward
  • 1.9.1 Summary of key findings
  • 1.9.2 Recommendations for practice
  • 1.9.3 Avenues for future research
  • References
  • 2 Analysing cyber-physical attacks: the human operator challenge in mining
  • 2.1 Introduction
  • 2.2 Mining process plants (MPP)
  • 2.2.1 Mining
  • 2.2.2 Overview of SCADA systems in MPP
  • 2.3 Cyber-physical systems and attacks
  • 2.3.1 Cyber-physical attacks (CPA)
  • 2.4 Humans and human operators in mining
  • 2.4.1 Human operations in mining
  • 2.4.2 Human operator
  • 2.5 Human psychological challenges in mining
  • 2.5.1 Impact of cyberattacks on mental health
  • 2.5.2 Human operator mental health
  • 2.6 Autonomous cyber-physical security (CBPS)
  • 2.7 Autonomous operator for cyber-physical systems
  • 2.7.1 A theoretical CPS model for an autonomous operator
  • 2.8 Conclusions
  • References
  • 3 Building cognitive resilience for enhanced cyber governance
  • 3.1 Introduction
  • 3.2 Human psychology and cognitive abilities
  • 3.3 Digital world and cybersecurity governance
  • 3.4 Social engineering and human psychology
  • 3.4.1 Example 1: malicious actors use whaling attack to defraud bank of US$75.8 million
  • 3.4.2 Example 2: malicious actors use AI voice impersonation to defraud organisation of US$243,000
  • 3.4.3 Example 3: malicious actors exploit human nature to defraud Australians of more than AU$7.2 million
  • 3.5 Public trust and citizen engagement
  • 3.5.1 Example 4: authentication gaps: losing user trust to malicious exploits
  • 3.5.2 Example 5: credential stuffing, blaming users for data theft
  • 3.6 Cyber and cognitive resilience building
  • 3.6.1 Humans need to know what they are doing to manage their cybersecurity
  • 3.6.2 Humans need to understand what can go wrong
  • 3.6.3 Humans need to be willing to learn from their (and others') experiences
  • 3.6.4 Thinking under pressure: navigating the digital world
  • 3.6.5 Humans need to adapt to the fast-moving digitisation of the world
  • 3.7 Conclusion
  • 3.8 Acknowledgement
  • References
  • 4 Cybersecurity in Australian higher education curricula: the SFIA framework
  • 4.1 Introduction
  • 4.2 Curriculum design
  • 4.3 Frameworks
  • 4.4 Skills framework for the information age (SFIA)
  • 4.5 Two examples of accrediting bodies
  • 4.6 Other frameworks
  • 4.6.1 NIST national initiative for cybersecurity education (NICE): the United States
  • 4.6.2 CSEC2017 for cyber and CC2020 for computer science
  • 4.6.3 The cybersecurity body of knowledge (CyBOK) - UK
  • 4.6.4 Cybersecurity skills framework (SPARTA) - Europe
  • 4.7 Discussion
  • 4.8 Conclusion
  • References
  • 5 Dark echoes: the exploitative potential of generative AI in online harassment
  • 5.1 Background
  • 5.1.1 Introduction to generative AI
  • 5.1.2 Applications across various domains
  • 5.1.3 Fundamentals of generative AI algorithms
  • 5.1.4 Autonomous creation of lifelike content
  • 5.1.5 Ethical and security implications
  • 5.1.6 Vulnerabilities exploited by malicious actors
  • 5.2 Significance of the issue
  • 5.2.1 Escalation of online harassment through generative AI
  • 5.2.2 Legal and ethical quandaries
  • 5.2.3 Strategies for detection, prevention, and redress
  • 5.3 Generative AI and online harassment: an unholy alliance
  • 5.3.1 Exploitative potential
  • 5.3.2 Lifelike content creation
  • 5.3.3 Automation of harassment tactics
  • 5.3.4 Challenges for detection and response
  • 5.4 Nefarious uses of generative AI
  • 5.4.1 Social media
  • 5.4.2 CEO fraud case
  • 5.4.3 Virtual kidnapping scam
  • 5.4.4 Revenge pornography
  • 5.4.5 Manipulation in financial markets
  • 5.4.6 AI-generated phishing emails
  • 5.4.7 Synthetic identity fraud
  • 5.4.8 Fabricated evidence in legal cases
  • 5.4.9 Examination of tactics employed by harassers
  • 5.5 Legal and ethical quandaries
  • 5.5.1 Legal challenges in addressing generative AI-driven harassment
  • 5.6 AI in creative industries and copyright challenges
  • 5.6.1 Insufficiency of legislation in the face of AI advancements
  • 5.6.2 Legal and regulatory frameworks for AI
  • 5.6.3 Balancing free expression and user protection
  • 5.7 Ethical concerns
  • 5.8 AI governance, strategy, and the 'good society'
  • 5.9 Strategies for detection and prevention
  • 5.9.1 Advanced AI detection technologies
  • 5.9.2 Collaborative industry efforts
  • 5.9.3 Public awareness and education
  • 5.10 Research and development
  • 5.10.1 Investment in AI ethics research
  • 5.10.2 Engaging the academic community
  • 5.11 Legal and policy measures
  • 5.11.1 Regulatory frameworks
  • 5.11.2 International cooperation
  • 5.12 Technology user policies
  • 5.12.1 Robust platform policies
  • 5.12.2 User reporting mechanisms
  • 5.13 Ethical AI development
  • 5.13.1 Ethical guidelines for AI developers
  • 5.13.2 Incorporating ethical AI into design
  • 5.14 Redress and mitigation
  • 5.14.1 Victim support
  • 5.14.2 Accountability of tech companies
  • 5.14.3 Proactive monitoring for misuse
  • 5.14.4 Collaboration with law enforcement
  • 5.15 Policy and legal measures
  • 5.15.1 Legal frameworks for redress
  • 5.15.2 Regulations to prevent repeat offenses
  • 5.16 Public awareness and advocacy
  • 5.16.1 Awareness campaigns
  • 5.16.2 Advocacy for victims' rights
  • 5.17 Conclusion
  • 5.17.1 Developing comprehensive policies and legal frameworks
  • 5.17.2 Continuous technological vigilance and adaptive detection mechanisms
  • 5.17.3 Global collaboration, standards, and cooperation
  • 5.17.4 Public awareness, education, and evolving educational programmes
  • 5.17.5 Ethical considerations in AI development
  • 5.17.6 Proactive approach to emerging threats and evolving tactics
  • References
  • 6 Trust and risk: psybersecurity in the AI era
  • 6.1 Introduction
  • 6.1.1 The emerging importance of securing mental health in the digital age
  • 6.2 Trust and risk in AI
  • 6.2.1 Psychological foundations of trust
  • 6.2.2 Context and significance of trust and risk in the AI era
  • 6.2.3 Benefits of trust in AI
  • 6.3 Risks stemming from trust in AI
  • 6.3.1 Over-trust in AI products: abuse and misuse
  • 6.3.2 Under-trust in AI products: disuse
  • 6.4 Psybersecurity risk framework
  • 6.4.1 Mass impact
  • 6.4.2 Direct AI impact
  • 6.4.3 Malicious use of AI
  • 6.5 Discussion
  • 6.5.1 Awareness training
  • 6.5.2 Transparency in AI
  • 6.5.3 Industry regulations
  • 6.5.4 Legal frameworks
  • 6.6 Conclusion
  • References
  • 7 Security through influence over mandate
  • 7.1 Introduction
  • 7.1.1 Make security tangible, not another compliance slideshow
  • 7.1.2 Ease, not pain, should be synonymous with security
  • 7.1.3 Resistance is necessary and good
  • 7.1.4 Security is achieved through continuous, small steps
  • 7.1.5 Security is enablement, not gatekeeping
  • 7.2 Pattern 1: make security tangible, not another compliance slideshow
  • 7.2.1 Adding to the cultural fabric
  • 7.2.2 Accelerating the spread of knowledge
  • 7.2.3 The difference between reading and knowing the incident response plan
  • 7.2.4 The discipline of security chaos engineering
  • 7.2.5 Summary
  • 7.3 Pattern 2: ease, not pain, should be synonymous with security