No.31757
no. it leaked the claude code tools, not the LLM. kuso OP can't even be bothered to watch their own kuso youtuber slop.
I will accept apology only in the form of self-thread delete or filmed ritual suicide.
Deleted by cool-guy from >>>/maho/7041 Post No. 7043 from thread 7041 No.31758
The video discusses a significant incident involving Anthropic, the creators of the Claude AI models, regarding a perceived "leak" of their source code. Here is a summary of the key points:
The "Leak" vs. Reality
Despite the sensational title, the video clarifies that this wasn't a malicious hack or a total repository dump. Instead, it refers to Claude's System Prompt—the internal set of instructions that tells the AI how to behave—becoming publicly visible or "leaked" through specific user interactions.
Key Details
The System Prompt: The "source code" being discussed is actually the extensive, high-level natural language instructions Anthropic uses to keep Claude helpful, harmless, and honest.
Safety Guardrails: The leak revealed the specific constraints Anthropic places on Claude, such as how it should handle controversial topics, its refusal to generate certain types of content, and its instructions to be concise.
The "Mistake": The "tragic mistake" often refers to how easily these instructions were extracted using prompt injection techniques (basically tricking the AI into reciting its own rules).
Why It Matters
Competitive Intel: For other AI companies, seeing how Anthropic "tunes" Claude's personality provides a roadmap for fine-tuning their own models.
Model Transparency: For users and researchers, this provides a rare look at the "brain" of the AI and how developers attempt to hard-code ethics and personality.
Vulnerability: It highlights a recurring issue in LLMs: the difficulty of keeping "system-level" instructions hidden from the end-user.
Note: In the world of AI, "source code" usually refers to the underlying architecture (the weights and math), while "system prompts" are the instructions. While the video uses the term "source code" for clicks, the actual event was the exposure of the system instructions.
No.31759
One is "kill yourself" and the other is AI spam.
The thread itself is low quality and OP admits it's clickbait but there are still limits to how you should respond to someone having horrendous taste in youtube videos
No.31763
>>31759I stand by my response. Really, asking them to commit seppuku for their transgressions is the kindest response. Clickbaiters deserve a fate worse than death. Look at the logical conclusion on 4chan when you allow such low effort filth to fester unchecked.
No.31805
when I load ComfyUI part of the terminal text says
Found comfy_kitchen backend cuda: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_mxfp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
and I always smile when I see 'apply_rope' it sounds like a typical shitpost way of telling someone to commit suicide