
Generative AI in Cybersecurity
Expected publication: 20 November 2025
€38.99 incl. VAT
While the rapid advancements in and growing variety of publicly available generative AI tools enable cybersecurity use cases such as threat modeling, security awareness support, web application scanning, actionable insights, and alert fatigue prevention, they have also brought a steep rise in the number of offensive, rogue, and malicious generative AI applications. The result is a new era of cybersecurity that necessitates new approaches to detecting and mitigating cyberattacks. With large language models, social engineering tactics can reach new heights in the efficiency of phishing campaigns and cyber-deception in general.

This book is a review of the technologies, tools, and approaches in this rapidly evolving field. Specifically, it looks into the most common generative AI tools used by malicious actors, outlines cyber-deception techniques realized with generative AI, and examines the security risks of large language models. It covers the malicious prompt engineering techniques hackers use to jailbreak common chatbot defenses, such as the DAN prompt, the switch technique, or character play, and notes that unsafe code can be generated with genAI chatbots even without jailbreaking (for example, via modular coding). Familiarity with these techniques is important not only for understanding how threat actors bypass the security mechanisms of chatbots, but also for using chatbots for ethical hacking without being blocked (differentiating between legitimate and nefarious use).

The book also discusses how text-to-image, text-to-speech, and text-to-video diffusion models are used in the wild for cyber-deception, and presents deepfake detection techniques to counter them. The reactive countermeasures covered also include spam detection and protection against online harassment. Proactive countermeasures are suggested to make generative AI models less susceptible to misuse, from hardening the security of generative AI services and tools to securing generative AI use.