International Workshop on Dark Software Engineering (DSE 2026)
What is Dark Software Engineering (DSE)?
Dark Software Engineering (DSE) is the deliberate misuse of software engineering practices to deceive, manipulate, or exploit people. It is broader than “dark patterns” in user interfaces: it spans the entire lifecycle—from how requirements are framed and systems are architected, to how features are implemented, tested, deployed, and tuned with data. Examples include manipulative consent flows and defaults, misleading notifications and nudges, deceptive chatbots and recommender systems, growth experiments that hide choices or fatigue users into agreement, metrics and A/B tests optimized for engagement at the expense of user welfare, and AI-generated or AI-amplified content designed to mislead at scale.
Why is this urgent and relevant?
- Real-world harms and accountability are rising. Users, regulators, and industry are calling out deceptive designs and practices; teams need concrete engineering guidance to avoid them.
- AI accelerates manipulation. Generative and predictive models enable scalable, personalized persuasion and misinformation. We need software-engineering tools and processes that can detect, test, and mitigate these risks.
- The engineering gap is real. Discussions often live in HCI, policy, and ethics venues, while actionable methods for requirements, design, implementation, testing, deployment, and auditing lag behind. This workshop centers the software engineering lens and connects it to policy and HCI so that solutions are practical and enforceable.
- Community and knowledge are fragmented. We aim to bring together researchers, practitioners, educators, and policymakers to share cases, tools, datasets, and curricula.
Goals of the Workshop:
- Establish a shared vocabulary and taxonomy of DSE practices across the software lifecycle.
- Surface case studies that show how deceptive practices arise in real projects, including organizational incentives and constraints.
- Identify analysis, testing, and auditing methods (manual and automated) to detect and prevent DSE, including for AI-enabled systems.
- Discuss processes and governance (reviews, checklists, red-teaming, design critiques, data and model documentation) that help teams resist perverse incentives.
- Bridge communities—software engineering, HCI, AI, law/policy, and industry—to co-create feasible interventions.
- Define a research and action agenda: benchmarks, datasets, open tools, curricular units, and best-practice guidelines.
Topics of interest
Topics include, but are not limited to:
- Taxonomies and classifications of dark patterns and dark software engineering across requirements, design, implementation, testing, and operations.
- Case studies from industry or the public sector documenting harms, root causes, and remediation.
- AI deception and manipulation: persuasive chatbots, synthetic media, recommender systems, data-poisoning, prompt and UI-level deception.
- Adversarial and manipulative UX: consent flows, default settings, confirmshaming, forced continuity, disguised ads, obstruction and nagging patterns.
- Organizational dynamics: incentives, OKRs and metrics that drive DSE; governance models that counteract them.
- Tooling for detection and prevention: static/dynamic analysis, UI linting, pattern detectors, log and telemetry audits, compliance tooling, “ethics as code.”
- Testing and evaluation: experiment design that avoids deceptive outcomes, guardrail tests, red-teaming for UX and ML systems, human factors evaluation.
- Documentation and transparency: model cards, data sheets, decision logs, audit trails, changelogs that surface user-impacting changes.
- Policy and regulation: compliance-aware engineering for consumer protection, privacy, AI governance, accessibility, and advertising rules.
- Education and training: curricula, exercises, and practitioner guidelines for ethical software engineering and dark-pattern avoidance.
- Open problems: benchmarks, datasets, and shared infrastructures for studying and mitigating DSE.
Call for Papers
Dark Software Engineering (DSE) focuses on the deliberate misuse of software engineering practices to deceive, manipulate, or exploit users. Beyond user-interface “dark patterns,” DSE spans requirements, architecture, implementation, testing, deployment, and operations, including AI-enabled deception (e.g., persuasive chatbots and synthetic media). The DSE’26 workshop brings together software engineering, HCI, AI, industry, and policy communities to define the problem space, share evidence and tools, and shape concrete engineering responses.
Topics of interest
Topics include, but are not limited to:
- Taxonomies and classifications of dark patterns and dark software engineering across requirements, design, implementation, testing, and operations.
- Case studies from industry or the public sector documenting harms, root causes, and remediation.
- AI deception and manipulation: persuasive chatbots, synthetic media, recommender systems, data-poisoning, prompt and UI-level deception.
- Adversarial and manipulative UX: consent flows, default settings, confirmshaming, forced continuity, disguised ads, obstruction and nagging patterns.
- Organizational dynamics: incentives, OKRs and metrics that drive DSE; governance models that counteract them.
- Tooling for detection and prevention: static/dynamic analysis, UI linting, pattern detectors, log and telemetry audits, compliance tooling, “ethics as code.”
- Testing and evaluation: experiment design that avoids deceptive outcomes, guardrail tests, red-teaming for UX and ML systems, human factors evaluation.
- Documentation and transparency: model cards, data sheets, decision logs, audit trails, changelogs that surface user-impacting changes.
- Policy and regulation: compliance-aware engineering for consumer protection, privacy, AI governance, accessibility, and advertising rules.
- Education and training: curricula, exercises, and practitioner guidelines for ethical software engineering and dark-pattern avoidance.
- Open problems: benchmarks, datasets, and shared infrastructures for studying and mitigating DSE.
Submission guidelines
Submission types
- Long papers: Up to 8 pages — mature contributions with completed results, rigorous evaluation, or in-depth case studies.
- Short papers: Up to 6 pages — concise original work, early results, focused experience reports, or industry demonstrations.
- Extended abstracts: Up to 5 pages — position or experience pieces with emerging results, lessons learned, open problems, or novel ideas; exempt from APCs under current ACM open-access policies.
Originality and review
- Submissions must describe original work that is not published or under review elsewhere.
- The review process is single-blind: author names and affiliations must appear in the submission.
- Each submission will be reviewed by at least three reviewers and evaluated for clarity, relevance, originality, and contribution.
- Submissions will receive a preliminary desk check for scope, formatting, and page-limit compliance.
Formatting and submission
- Submit in PDF via HotCRP.
- Submissions must strictly conform to the ACM conference proceedings formatting instructions: use the ACM Primary Article Template in the double-column format.
- Page limits include the abstract, figures, tables, and references.
- There is no limit on the number of submissions per author; each submission will be evaluated on its own merits.
Publication and presentation
- Accepted papers will be published in the ACM Digital Library.
- At least one author of each accepted paper must register for DSE’26 and present the work at the workshop.
APC note
As per current ACM open-access policies, extended abstracts are exempt from APCs. For other paper types, standard ACM policies apply.
Contact
For scope questions or clarifications, email the co-organizer Mohamad Kassab: mkassab@bu.edu