One cannot jump to the “How” when the “What” has NOT been properly defined. Despite the EU being the first to come up with an Artificial Intelligence (AI) Act and Canada being at the forefront of digital-asset laws, there is NO first-mover advantage in rule-making. Better late than never, if we can get it right when framing regulatory solutions to the problems in the AI and Crypto spaces. This three-part series aims to decipher the noumenon of, and define, what AI, DeFi, tokenization, and cryptocurrencies truly are.

Definition of AI
AI is NO ordinary “machine-based system” or “automation”, but a “cognitive system” capable of “learning” to continuously improve the functioning of a computer or any other technology or technical field (e.g. a GPS satellite system is NOT AI, but traffic prediction and personalized recommendations are). By referring to AI as “cognitive”, the focus is on the system’s internal (mental-like) processes, rather than SOLELY on external behaviors and consequences.
Learning per the “Dog Salivating Theory” – which conditions involuntary behavior by pairing a neutral, external stimulus with an unconditioned one so that two or more phenomena become associated – does NOT, by itself, make a system AI. Computational techniques that merely mimic pet training are insufficient and unlikely to pose an existential threat to humanity, except through the exploitation of dopamine for addictive behaviors. Likewise, the presence or absence of “operant conditioning” – modifying voluntary behavior through the external consequences of reward and punishment – could merely be automation in itself. Without combining it with other internal (mental-like) computational techniques, such systems should NOT be considered AI.
CAPTCHA – a security test that uses a “Turing test” to differentiate between humans and bots – is NOT AI in itself. However, Google’s reCAPTCHA system is part of an AI that comprises internal processes (e.g. keylogging to track and analyze user interactions, such as mouse movements and typing patterns) to determine whether a user is human. Keylogging without a user’s consent could be invasive of privacy; hence it should be a regulated activity. Another computing activity that should be regulated is the use of AI to help bots bypass CAPTCHA or similar security tests, except when used for ethical hacking.
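To illustrate the distinction, below is a minimal, hypothetical sketch of the kind of behavioral-signal analysis such an internal process might perform. The features and thresholds are invented for illustration only and do not reflect any actual reCAPTCHA logic.

```python
import statistics

def looks_human(x_coords, y_coords, dwell_times_ms):
    """Toy heuristic: humans move a pointer with irregular speed and
    hesitate between keystrokes; scripted bots tend to be uniform.
    Thresholds are illustrative, not calibrated to any real product."""
    # Variance of per-step displacement: near-zero implies robotic, linear motion.
    steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(zip(x_coords, y_coords),
                                           zip(x_coords[1:], y_coords[1:]))]
    speed_jitter = statistics.pstdev(steps) if len(steps) > 1 else 0.0
    # Humans rarely type with perfectly regular inter-key intervals.
    timing_jitter = statistics.pstdev(dwell_times_ms) if len(dwell_times_ms) > 1 else 0.0
    return speed_jitter > 0.5 and timing_jitter > 10.0

# A bot replaying a straight-line path at fixed cadence is flagged; a noisy,
# human-like trace passes.
print(looks_human([0, 10, 20, 30], [0, 10, 20, 30], [100, 100, 100]))  # False
print(looks_human([0, 7, 21, 30], [0, 12, 19, 33], [80, 140, 95]))     # True
```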
A crawler or web scraper that automatically extracts data from an external environment (the web) is NOT inherently an AI. When the crawler’s function is combined with internal processes involving intelligent data analysis that enhances data collection with contextual understanding (rather than just keyword search), such a system is an AI.
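A minimal sketch of the contrast, with invented functions: the literal keyword test is plain scraping, while the topic-vocabulary score stands in for the “internal process” (a production system would use embeddings or a language model instead of this toy set intersection).

```python
def keyword_match(page_text: str, keyword: str) -> bool:
    """Plain scraper logic: a literal keyword hit, NO internal analysis."""
    return keyword.lower() in page_text.lower()

def contextual_score(page_text: str, topic_terms: set[str]) -> float:
    """Toy stand-in for contextual understanding: score a page by how much
    of a topic's vocabulary co-occurs, rather than by one literal keyword."""
    words = {w.strip('.,;:!?').lower() for w in page_text.split()}
    return len(words & topic_terms) / len(topic_terms)

page = "The central bank raised interest rates to curb inflation."
print(keyword_match(page, "monetary policy"))            # False: literal miss
print(contextual_score(page, {"bank", "rates", "inflation",
                              "monetary", "policy"}))    # 0.6: contextual hit
```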
A scanner or camera that surveils a public area outside private property is NOT an AI. Adding a sophisticated system with internal processes that controls one or orchestrates multiple surveillance camera(s) to enhance monitoring with contextual awareness (e.g. facial recognition to analyze identity) makes it an AI.
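The same camera-versus-system distinction, sketched with hypothetical names (`record`, `orchestrate`, and the pluggable `identify` recognizer are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    pixels: bytes  # raw image data; contents irrelevant to the sketch

def record(frame: Frame) -> None:
    """A bare camera pipeline: capture and store. No internal analysis,
    hence NOT AI under the distinction drawn above."""
    print(f"archived frame from {frame.camera_id}")

def orchestrate(frames: list[Frame], identify) -> dict[str, list[str]]:
    """Adding an internal process -- a pluggable `identify` function standing
    in for facial recognition -- turns the same feeds into a contextual
    monitoring system that correlates identities across cameras."""
    sightings: dict[str, list[str]] = {}
    for f in frames:
        for person in identify(f.pixels):
            sightings.setdefault(person, []).append(f.camera_id)
    return sightings

# With a stub recognizer, the orchestrator tracks one identity across cameras.
stub = lambda pixels: ["person-A"] if pixels else []
print(orchestrate([Frame("cam-1", b"\x01"), Frame("cam-2", b"\x01")], stub))
# {'person-A': ['cam-1', 'cam-2']}
```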
We despise heavy-handed government policies that brutally force AI firms to censor / filter so-called “unsafe behaviors / outputs” or require adversarial training to ban or reveal what authoritarians may deem “vulnerabilities”. We are thankful for Vice President JD Vance’s remarks at the AI Action Summit, in particular that “AI must remain free from ideological bias, and that American AI will not be co-opted into a tool for authoritarian censorship.” It helps address civic concerns over massive government surveillance. NOTE: “ideological bias” is a human bias driven by political or social belief. Whereas in many domains – especially competitive ones like defense or finance – “bias” is not just inevitable, it is essential: to prioritize certain outcomes (e.g. speed over accuracy, stealth over transparency), to reflect strategic posture (e.g. risk tolerance, adversary modeling), and/or to exploit asymmetries (e.g. alpha in trading, deception in war).
There are different AI machine-learning algorithms. Some use cognitive reasoning for multi-step strategic plans (e.g. a chess game), where “bias” is essential. Others are non-reasoning (or generative) models that excel at fast, pattern-based tasks like content generation or chatbots, where consensus building and/or optimization for the most commonly accepted response (consistency and reproducibility of outcomes) is prioritized. One size does not fit all.
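To make the “bias is essential” point concrete, here is a toy two-ply lookahead (all payoffs and the loss-weighting are invented for illustration): the same search picks different strategies purely because of the bias baked into its evaluation function.

```python
def best_branch_value(tree, evaluate):
    """Toy strategic lookahead: we choose a branch (max), then chance picks
    an outcome (expectation). Strategic "bias" lives in `evaluate`."""
    return max(sum(evaluate(leaf) for leaf in branch) / len(branch)
               for branch in tree)

# Risky branch: big upside, big downside. Safe branch: modest, steady payoffs.
risk_tree = [[-10, 14], [1, 2]]
neutral = lambda payoff: payoff                              # chase expected value
averse  = lambda payoff: payoff * (4 if payoff < 0 else 1)   # losses hurt 4x

print(best_branch_value(risk_tree, neutral))  # 2.0 -> the risky branch wins
print(best_branch_value(risk_tree, averse))   # 1.5 -> same search, safe branch
```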
Bias can be “conscious” or “unconscious”. A cognitive system does NOT have to be conscious; neuroscientists believe consciousness could be a distributed process that does not depend on a singular “self”. Unconscious bias can inflict harm even when unintentional – that is negligence. Cognitive systems that make no recommendations should NOT escape AI responsibilities. “Bias” depends on social norms, and social norms evolve over time.
“Signal amplification” is an absolute necessity in sequencing technologies. It enables detection, ensures accuracy via redundancy, and is particularly useful when working with limited input material. However, amplification inherently introduces “technical bias” into sequencing data – a known and acknowledged challenge, but it may be better than the inability to generate any data at all.
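A toy simulation of that technical bias, with invented numbers: two fragments start at equal abundance, but unequal per-cycle duplication probabilities (a stand-in for, e.g., GC-content effects) skew the measured ratio.

```python
import random

random.seed(7)

# True composition of a tiny "sample": fragment -> copies present.
truth = {"A": 50, "B": 50}

def amplify(sample, cycles, efficiency):
    """Each cycle, every copy duplicates with a sequence-dependent
    probability. Unequal efficiencies skew the final ratios: the
    'technical bias' described above."""
    counts = dict(sample)
    for _ in range(cycles):
        for frag in counts:
            counts[frag] += sum(random.random() < efficiency[frag]
                                for _ in range(counts[frag]))
    return counts

out = amplify(truth, cycles=10, efficiency={"A": 0.95, "B": 0.75})
total = sum(out.values())
print({k: round(v / total, 3) for k, v in out.items()})
# Started 50/50; after biased amplification A dominates, roughly 3:1
# (about 0.75 vs 0.25; exact values depend on the seed).
```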
The US trade surveillance process objectively checks whether a trade has the effect of altering the worth of a target. It is a red flag of market manipulation if the trade caused a “bias” in the market mechanism. The challenge is how to distinguish a systematic technical bias or human-made artifact in the data from natural evolutionary bias or selection. Content preference and context bias are highly dependent on the specific use case or application, i.e. largely subjective and situational.
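A crude sketch of such an objective check, with invented parameters: flag a trade whose price impact exceeds a multiple of recent one-step volatility. Real surveillance systems use far richer context; this only illustrates the idea of testing whether a trade “altered the worth of a target”.

```python
import statistics

def impact_flag(prices_before, price_after_trade, z_threshold=3.0):
    """Toy surveillance check: did a trade move the price by more than
    `z_threshold` standard deviations of recent one-step moves?"""
    moves = [b - a for a, b in zip(prices_before, prices_before[1:])]
    sigma = statistics.pstdev(moves) or 1e-9   # guard against a flat tape
    jump = price_after_trade - prices_before[-1]
    return abs(jump) > z_threshold * sigma

tape = [100.0, 100.1, 99.9, 100.0, 100.2, 100.1]
print(impact_flag(tape, 100.15))  # False: within normal noise
print(impact_flag(tape, 101.5))   # True: red flag worth human review
```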
“Normalization” is the statistical data-smoothing process that is meant to address “bias”. Yet, whenever one applies computational methods to the data, the relationship between the true “signal” and the technical “noise” is inherently altered. There is a trade-off: gaining clarity of signal at the potential cost of introducing subtle “new biases” or suppressing real, but weak, “signals”. Strive for the most timely, accurate, relevant, and complete data where possible to avoid excessive manipulation during normalization and achieve the best sequencing results. Unfortunately, a consolidated tape without time-lock encryption to make market data available securely in synchronized time causes an “initial bias” that exacerbates the gap between subscribers of proprietary feeds and users of the public SIPs.
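The trade-off in miniature: a simple moving average (the data are invented) removes noise but also halves a real, one-sample spike, exactly the suppression of weak signals described above.

```python
def moving_average(xs, window=3):
    """Simple normalization/smoothing: replace each point with the mean of
    its `window`-wide neighborhood (truncated at the edges)."""
    half = window // 2
    return [sum(xs[max(0, i - half): i + half + 1]) /
            len(xs[max(0, i - half): i + half + 1]) for i in range(len(xs))]

# A flat series carrying one weak but real one-sample spike (the "signal").
raw = [1.0, 1.0, 1.0, 1.0, 4.0, 1.0, 1.0, 1.0]
smooth = moving_average(raw)
print(max(raw))               # 4.0 -- the weak signal is visible
print(round(max(smooth), 2))  # 2.0 -- smoothing halved it: the trade-off
```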
A hallucination is considered an output that is outside the norm. Hallucinations are like dreams (a state of consciousness in which one’s “awareness” of the external environment may be out of sync), except dreams may be more vivid / emotional than hallucinations. Humans’ five senses are less active when dreaming. When an AI’s sensory attention is focused primarily on language or visual images, it undermines other sensory inputs, such as sound, touch, smell, and taste.
AI is often mythologized by humans as all-knowing. People expect instant gratification. Yet, AIs are like replicas of humans. There will be occasions of “I don’t know”, irrelevant fluff being generated, stuttering, or words that cannot catch up with thoughts. The ability to form and process complex thoughts is distinct from the ability to articulate fluently and coherently. Improved context awareness, better adversarial training, use of ensemble learning, and multimodality all contribute to reducing AI hallucinations, but they cost extra time and effort and may introduce noise.
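One of those mitigations, ensemble learning, in minimal form: poll several models, accept only a quorum answer, and otherwise say “I don’t know”. The stub models, quorum value, and answers are all invented; the point is only the voting mechanic and its cost (running several models instead of one).

```python
from collections import Counter

def ensemble_answer(models, question, quorum=0.6):
    """Majority voting across independent models: accept an answer only when
    a quorum agrees; otherwise abstain. `models` is any list of
    question -> answer callables."""
    votes = Counter(m(question) for m in models)
    answer, count = votes.most_common(1)[0]
    return answer if count / len(models) >= quorum else "I don't know"

# Three stub "models": two agree, one hallucinates a different answer.
models = [lambda q: "Paris", lambda q: "Paris", lambda q: "Lyon"]
print(ensemble_answer(models, "Capital of France?"))   # Paris (2/3 >= 0.6)
print(ensemble_answer(models, "...", quorum=0.9))      # I don't know
```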
Unlike data extraction, which can be timely, accurate, and complete if one is willing to pay extra, intelligence may never be 100%. That is, AI predictions or advice seldom possess all attributes simultaneously due to inherent constraints, such as information asymmetry, the tension between speed and quality, cognitive limitations, and the dynamic nature of reality.
Do NOT lambast AI for hallucination. Humans often fail to think critically, are unable to synthesize information from various sources to find deeper meaning, and lack adaptability and creativity. Yet, humans like to dream and imagine. Should cognitive systems be allowed to dream – a possible indicator of Artificial General Intelligence? AI hallucinations may discover unknown unknowns which were previously nonsensical to humans. To better understand nuances and enhance AI performance, policy makers should incentivize the industry to turn “unknowns” into “knowns.”
The last administration inappropriately assumed or interpreted “AI bias” as a systematic and repeatable error in a computer system that creates unfair outcomes, such as disadvantaging a particular gender or race. This contradicts the current administration’s merit-based policy (EO 14173) that dismantles Diversity, Equity and Inclusion (DEI) initiatives. Trying to “neutralize” biases in pursuit of consensus or fairness can dilute the US strategic advantage, especially when foreign adversaries are not playing by the same rules.
The EU AI Act’s mandate to develop a Code of Practice on General-Purpose AI is problematic. Best-practice sharing is not wrong. The concern is how they would consider “the Code’s design and build coherence where needed”. Regurgitating GRC tools as AI compliance is the wrong approach. “Coherence” limits creativity; differentiation is what drives innovation. The EU Digital Markets Act, which aims to ensure “fair competition and practices among large online ‘gatekeeper’ platforms like search engines and app stores”, is nothing but a protectionist policy. Invoking antitrust laws may suffice.
China recently released its “AI Safety Governance Framework 2.0” (CN-AISGF2). Its cybersecurity law, computer crime criminal law (Articles 285-287), and Personal Information Protection Law are the broader policies that prioritize national security and state control. CN-AISGF2 looks undeniably comprehensive when compared against the 2022 version of the US NIST-AIRMF (see pages 9-10 of our comment letter). Its usage of familiar GRC best practices makes it appealing for foreign jurisdictions to adopt. If the US NIST-AIRMF playbook and related guidance blindly continue down the oversimplified path of “Govern” at the center of “Map, Measure, and Manage” (GMMM), they may end up resembling CN-AISGF2.
In order for the US to exert influence on global AI policies, the US AI regulatory regime should re-center its focus on the identified key AI risks (energy; addictive, herd, and/or polarized behaviors that destroy humans’ ability to think independently; censorship; hyper-optimization; insurgent / unhealthy competition) to mitigate the downfall of humanity. Policy makers should consider Asimov’s Three Laws and the Zeroth (Fourth) Law for AI. The ISO 23894 risk, ISO 42001 management system, and ISO 38507 governance frameworks should be redirected accordingly. Meanwhile, the Computer Fraud and Abuse Act (CFAA), enforced by the Department of Justice, is a narrower statute compared to those of other jurisdictions. The CFAA is meant to target external hackers’ unauthorized access and damage; it does NOT impute liability to internal workers who disregard a use policy. Bypassing code would constitute a cybercrime ONLY if the code is a “real barrier” as opposed to a “speed bump”. “Function creep” is a concern we identified with the FINRA Consolidated Audit Trail (CAT) system. Adverse scenarios with government and bank systems have happened, with severe consequences.
The original inventor may never come up with an exhaustive list of usage purposes or anticipate the possible repurposing of his/her technology. It is unjust to require AI firms to “establish comprehensive and explicit enumeration of AI systems’ context of business use and expectations.” Do not expect examiners to truly understand every bit of how “contextual factors may interact with AI lifecycle actions”, for they are rule and law enforcers, not technologists. Free enterprise should NOT be obligated to reveal the secret ingredients of its technologies, unless it is identified, with evidence, as being connected to a suspected crime.
We agree with the NIST AI Risk Management Framework (NIST-AIRMF) Playbook (page 63 - Map 1.4) where it says, “Socio-technical AI risks emerge from the interplay between technical development decisions and how a system is used, who operates it, and the social context into which it is deployed.” Yet, these considerations are applicable to all technologies, not just AI. There is already a long list of US data / information security standards and technical safeguarding requirements, such as PCI DSS, GLBA, SOX DS 5.7, 5.8, 5.11, 11.6, HIPAA ePHI §164.306, §164.312, etc. The right to revoke a user agreement under a standard provision, such as “no illicit or manipulative use of technology,” may suffice.
The collective thoughts above lead us to recommend a revised definition of AI under 15 U.S.C. § 9401(3) to preempt state AI laws; we suggest using the following or the like:
“Covered AI technologies refer to cognitive systems (beyond learning from pairing a neutral stimulus that becomes a conditioned stimulus), comprising memory AND a topology of known lessons, that learn from regularities and irregularities of pattern(s) / knowns and unknowns / models / simulations, AND
the system’s internal process EITHER comprises multi-step reasoning (understanding in a way that mimics humans; NOT merely extracting signals to generate alerts; NOT unconscious thinking) OR is capable of generating a datum uniquely different from a plagiarized copy, that
manipulates or presents at least one abstracted phenomenon (a person or avatar, a thing or computer-generated element, or a real or virtual event that is hypothetical or observed to exist or happen in a distant past, in real time, or irrespective of space-time) in a metaverse, real, or virtual environment,
autonomously OR by following commands / instructions, to generate expectations, make believe, or assert that certain selected or perceived phenomena are occurring, will occur, or are available for use (regardless of whether the system internalizes, consumes, or makes feed(s) / data available to its users in a domain, on a dark web, or on any iteration of the internet or intranet), AND
through action (including provision of customized or generic recommendations that reinforce, strengthen, or weaken an ideology) OR inaction, to stimulate the thought processes of at least one individual human OR the operations of a machine.”