1. The Primacy of Human Connection & Real-World Value
The "Tech-As-Servant" Principle: We build technology to enhance human capability, well-being, or connection, not to replace or consume them. Technology must serve as a bridge to a better real-world experience, never as the destination itself. Our success metric is not "Time Spent on App," but the value and presence returned to the user's life once the device is put away.
Intentional Disengagement: We will proactively design features that encourage users to put the device down once the core purpose of the app’s functionality has been achieved.
Closed vs. Open Networks: We prioritise closed, private human networks (circles of trust) over open public forums. We only utilise public networking if the core purpose of the app necessitates it for societal benefit.
2. Cognitive Liberty & The Attention Economy
Notification Minimisation: We only push notifications when it is absolutely essential to the immediate usability or safety of the app. We will never use "nudge" notifications designed solely to drive engagement or habitual checking.
Elimination of Dark Patterns: We forbid the use of "infinite scrolls," "auto-play," or any interface design intended to exploit psychological vulnerabilities or bypass conscious choice.
No Virtual Rewards or Predatory Monetisation: We do not offer badges, "likes," or "streaks." Furthermore, we strictly forbid "loot boxes," "gacha" mechanics, or pay-to-win loops that mimic gambling or exploit impulsive behaviour.
Rejection of the Attention Economy: We will never place advertisements on our platforms. We refuse to participate in a model where human attention is the product being sold.
3. Accountability, Safety, and Trust
Accountability by Design: We design our systems so that fake profiles, scams, and fraudulent personas are both difficult to create and pointless to maintain, removing any beneficial purpose for doing so.
User Accountability & Enforcement: We hold users fully accountable for negative or harmful behaviour. Our platforms are built for adults (or supervised minors) who stand behind their actions. We maintain a zero-tolerance policy for illegal activity; such actions result in immediate removal from the platform and, where required by law, reporting to the relevant authorities.
Platform Accountability & Duty of Care: We reject the "passive conduit" defence used by big tech to ignore systemic toxicity. As the architects of these digital environments, we accept a moral and professional duty of care for the health of our communities. We do not use "free speech" as a shield to justify the hosting or algorithmic amplification of identifiable harm or harassment. We take responsibility for the overall health of the environment we provide.
Safety by Design (Anti-Stalking/Harassment): We proactively design against the use of our tech for physical or digital harm. This includes rigorous safeguards against doxing, "stealth" geolocation tracking, and features that could facilitate domestic abuse or stalking.
Anonymity vs. Privacy: While we protect user privacy, we do not provide a veil for toxic behaviour. We believe that a healthier society requires people to be responsible for their digital footprint.
4. Protection of Vulnerable Users & Children
Child-Safe Defaulting: While our products are built for accountable adults, any environment accessible to minors must use the highest possible safety defaults (e.g., maximum privacy, no stranger-to-child discovery, and no commercial exploitation).
Protection of the Elderly and Vulnerable: We design against "social engineering" and "dark patterns" that specifically target those with lower digital literacy or cognitive decline.
Age-Appropriate Interaction: We refuse to build "engagement" loops for children that disrupt developing brain chemistry or interfere with essential real-world socialisation and sleep.
5. Accessibility and Neuro-Inclusion
Universal Design Sovereignty: Accessibility is not a "bolt-on" feature; it is a foundational requirement. Our technology must exceed standard WCAG compliance to ensure that people with physical, sensory, or cognitive impairments have an equivalent experience.
Neuro-Inclusive Architecture: Recognising that many digital harms specifically exploit neurodivergent traits (such as ADHD, Autism, or OCD), we design interfaces that are calm, predictable, and free from sensory overload or forced urgency.
Language and Literacy: We strive for radical clarity. We avoid jargon, "legalese," and complex language that creates barriers for users.
6. Algorithmic Transparency and Emotional Integrity
No Echo Chambers or Radicalisation: We do not utilise algorithms to prioritise content based on engagement-driven "outrage" or to create polarised echo chambers. We design to prevent the "rabbit hole" effect that leads users toward extremist content.
No Emotional Manipulation: We forbid the use of "emotional contagion" testing or algorithms designed to manipulate a user's mood or emotional state for the sake of platform metrics.
Algorithmic Legibility: If an algorithm or AI is utilised to suggest an action or a connection, the user must be able to see why that suggestion was made. We reject "Black Box" logic.
7. Truth, Reality, and Synthetic Content
Mandatory AI Identification: Any content, interaction, or entity generated by AI must be clearly and permanently labelled. We forbid "Deepfakes" or AI avatars designed to deceive users into believing they are interacting with a real human being.
Content Provenance: Where possible, we implement digital watermarking or metadata standards to verify the origin and authenticity of content shared on our platforms.
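As an illustration of the metadata approach, origin information can be bound to content with a keyed signature, so any tampering with either the content or its claimed origin is detectable. This is a simplified sketch using Python's standard library (the key handling and schema are assumptions for illustration; a production system would use a standard such as C2PA and a managed signing key):

```python
import hashlib
import hmac
import json

# Hypothetical server-side signing key; in production this would be
# stored in a key-management service, never hard-coded.
SIGNING_KEY = b"example-key-do-not-use"

def attach_provenance(content: bytes, origin: str) -> dict:
    """Return signed metadata binding the content hash to its origin."""
    meta = {"origin": origin, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(meta, sort_keys=True).encode()
    meta["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return meta

def verify_provenance(content: bytes, meta: dict) -> bool:
    """Check both the signature and that the content still matches its hash."""
    claimed = {k: v for k, v in meta.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, meta["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

photo = b"raw image bytes"
meta = attach_provenance(photo, origin="user-upload:camera")
print(verify_provenance(photo, meta))             # True
print(verify_provenance(b"altered bytes", meta))  # False
```

The point is not this particular scheme but the property it demonstrates: authenticity becomes something the platform can check mechanically rather than assert on trust.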
8. Intellectual Property and Creator Rights
Respect for Copyright: We respect the intellectual property of human creators. We will not build technology that utilises copyrighted material without explicit consent and fair compensation.
Ethical AI Training: If a venture utilises generative AI, it must ensure that the underlying models were trained on ethically sourced datasets that respect "do not train" requests.
9. Data Privacy & Sovereignty
Data Non-Commercialisation: We will never sell user data. Period.
Ethical Personalisation: We only utilise specific user data when it provides a direct, functional benefit to the user’s immediate experience, and never in ways that undermine user autonomy.
The Right to Vanish: Users have the right to permanently and easily delete all data associated with their account at any time.
10. User Ownership and Content Sovereignty
Absolute Ownership: We believe that users are the sole owners of the content they create. We do not claim ownership of user-generated content.
Functional-Only Licensing: Our platforms only require a limited, non-exclusive licence to host and display content for the sole purpose of the service's functionality as intended by the user.
11. National Sovereignty and Security
Legal Compliance with Ethical Priority: We commit to complying with the laws of the nations in which our users reside. If local laws compel a violation of this Code, we will prioritise user safety and seek legal or technical avenues to maintain our ethical integrity.
Resistance to State Surveillance: We commit to robust, end-to-end encryption without "backdoors" for any government or third party.
12. Interoperability and the "Right to Exit"
No Ecosystem Lock-in: We support open standards that allow users to export their data. We stay successful through value, not entrapment.
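In practice, the "Right to Exit" means a user's data can be serialised into an open, human-readable format that any other service can parse. A minimal Python sketch (the schema identifier and field names are illustrative assumptions, not a defined standard):

```python
import json

def export_account(profile: dict, posts: list[dict]) -> str:
    """Serialise a user's data as portable, self-describing JSON."""
    bundle = {
        "format": "example-export/1.0",  # hypothetical schema identifier
        "profile": profile,
        "posts": posts,
    }
    # indent and ensure_ascii=False keep the archive readable by humans,
    # not just machines.
    return json.dumps(bundle, indent=2, ensure_ascii=False)

archive = export_account(
    profile={"handle": "sam", "joined": "2024-01-05"},
    posts=[{"id": 1, "text": "Hello"}],
)
restored = json.loads(archive)  # any other service can parse this back
print(restored["profile"]["handle"])
```

Using a documented, versioned format rather than a proprietary dump is what turns "you can export your data" from a checkbox into a genuine exit.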
Product Longevity: We avoid "planned obsolescence" and aim to support older hardware to ensure accessibility.
13. Economic, Revenue, and Environmental Alignment
The Wellness-Profit Loop: Our financial success must be a direct byproduct of our customers becoming happier, healthier, and more connected.
Environmental Stewardship: We optimise our code to minimise energy consumption, recognising the physical footprint of digital products.
14. Governance and Longevity
Constitutional Integrity: These principles are to be embedded into the legal DNA (Articles of Association) of every startup we launch.
The "Mission Veto": Good add Ventures (GaV) retains a "Mission Share" in our startups, providing a legal veto against any future attempts to abandon this Code.
By building under this code, we aim to prove that the most successful companies of the future will be those that treat human well-being as their primary asset.