
Welcome to kissu.moe!


/qa/ - Questions & Answers


  1. /qa/'s thoughts on AI

    1. /qa/'s thoughts on AI (235 replies)
      Anonymous
      No.100268
      AI art.png
      - 370.42 KB
      (512x512)

      There are two threads on /qa/ for different AI generation content and help, a thread on the morality and ethics of AI, one about the future potential AI holds, and one on >>>/megu/ for more in-depth help with specifics. Then there are threads scattered across all the boards using AI generation for image openers and such. However, none of these actually encompass kissu's opinion on AI!

      So, what do you /qa/eers think of AI currently? Do you like it, dislike it, find it useful in any meaningful way at all? Or are you simply not interested in the output of algorithms?

      I myself find AI to be a useful tool for generating the kind of content I've either exhausted the available stock of, or that's gated off by some hurdle I'd need to spend more time overcoming. When it comes to text/story generation, it's like a CYOA playground where I play the role of DM and set up all the lore/situations/characters, and then the AI acts out events as I lay out the flow of the story. This creates a more interactive experience for me than just writing out a story myself for some reason, and I find it highly addictive. Then for AI art generation, I find that AI works wonders for filling a specific niche taste I have, or a specific scenario I want to create an image for. It really is quite amazing in my eyes, and I have hopes for it getting even better in the future.

    2. Post 103675
      Anonymous
      No.103675

      >>103670
      You have to approach it the same way as aimbotting in an FPS. Unfortunately, the anonymity of imageboards means they'd likely just end up looking like a CS:GO server.

    3. Post 103880
      Anonymous
      No.103880
      [SubsPleas...jpg
      - 289.05 KB
      (1920x1080)

      Stephen Wolfram has a write-up about ChatGPT:
      https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
      There's probably some youtubers out there trying to explain it, but I'd rather read about it

    4. Post 103881
      Anonymous
      No.103881

      >>103880
      Holy crap, just skimmed this a bit but it looks like reading this alongside the additional materials is an entire course worth of material. Yeah, the only thing that'd probably compare to this would maybe be a lecture series.

    5. Post 103882
      Anonymous
      No.103882
      b543bc77c5...png
      - 147.04 KB
      (500x463)

      >>103880
      I read this yesterday; the article is very thorough and detailed, including an explanation of the whole background of machine learning itself, while also being written in layperson language, so even someone without any technical background can easily understand it.
      I highly recommend that everyone interested in this topic read it.

    6. Post 107014
      Anonymous
      No.107014
      [anon] The...jpg
      - 407.64 KB
      (1920x1080)

      I feel as though, with the way things are going, people are going to need to adapt to using AI as a necessary tool for work, the same way we treat Excel. The benefits and productivity increase would be insane for those who choose to use it, and those who don't would fall far behind as society moves past them. There will be no excuse for menial tasks eating up precious time when AI could be doing most of them.

      As it evolves I feel that AI has significant potential to upend the fabric of society. Much like the calculator put a bunch of human calculators out of jobs.

    7. Post 107017
      Anonymous
      No.107017

      "Is this the end of the screwdriver?" said the handyman using a power tool for the first time

    8. Post 107018
      Anonymous
      No.107018

      Here's how I'd put it: jobs which exist solely because they do a thing which is hard will be replaced by people who control the things which do difficult things.
      People will still want people doing those hard-to-do things, because some people like dedicated individuals, or it's a field where a personality goes a long way.

      Meanwhile, things which are difficult and require creative interpretation of lots of variables will remain, supported in smaller tasks by AI tools, though there's no obligation to use them.

    9. Post 107019
      Anonymous
      No.107019

      like. even with autocomplete and spellchecks it's still far faster to write notes on paper and then later move them to a PC.
      Perhaps we could move them to a computer using OCR or speech to text... but manually moving notes from paper to PC helps refine the thought process

    10. Post 107020
      Anonymous
      No.107020

      An OG well rounded otaku I correspond with has switched almost entirely from his more charming digital art style to AI for his OC characters; he is an enthusiastic adopter as well, which I sort of admire given that he is in his 50's and I by comparison still am having concerns that AI may actually be satanic.

      Anyways, without going off into that, I can only confess to /qa/ that not a single one of his AI creations has stirred my heart; they just aren't good or memorable. I hope after the fun of tinkering with new software fades that he may have a change of heart.

      >>103670
      AI would have made astounding thread bumpers during the harsher 4/qa/ days, but like its other applications, it's hard to consider that a power for good.

    11. Post 107022
      Anonymous
      No.107022

      https://www.kolide.com/blog/github-copilot-isn-t-worth-the-risk

    12. Post 107025
      Anonymous
      No.107025
      youtube/lU..
      - (720x420)
      https://www.youtube.com/watch?v=lUokIVmClss

      It's actually a little bit scary how AI copy pastes articles it finds online in a concise way that makes you think it has intelligence

      https://marina-ferreira.github.io/tutorials/js/memory-game/

      Misuse of this tool is a licensing nightmare.
      It has a lot of good and bad uses.

    13. Post 107086
      Anonymous
      No.107086
      [Rom & Rem...jpg
      - 260.68 KB
      (1920x1080)

      People have been talking about men being desensitized to "real" porn for years now, and the "impossible expectations" there are simple sex acts or maybe body parts being smaller or bigger than average. Well, there's other popular fetishes but they're gross and this isn't the thread for it.
      Anyway, as someone that has spent months researching, experimenting, and producing AI image generations for the purpose of sexual gratification (and made the Kissu Megumix as a result), I have to wonder how developing minds are going to adapt when presented with all their fantasies in image and text form on demand. I've now created some character cards for the 3.5 Turbo thing, which is like an uncensored GPT, and paired them with a simple set of 'emotion' images which I generated with the aforementioned Megumix. I have to say that the experience is quite... otherworldly. Like, "hours of doing it and now the sun has risen and I haven't eaten in 12 hours" amazing. Fetishes and scenarios that you can't really expect anyone to write in any substantial way, and yet it's presented on demand. And this is an inferior version of things that already exist (Character AI and GPT4).
      I'm old enough to be one of those kids that slowly downloaded topless pictures of women on a 28.8k modem or had La Blue Girl eternally buffering on RealPlayer, so compared to today, when I can generate these images and stories? I'm pretty much on the holodeck already. My desires are met as my fantasies have been realized (although with errors that the, uh, carnally focused mind blocks out).
      I can't help but feel a tinge of worry over this, as this almost feels like something that was never meant to be, like we're treading into the realm of the gods and our mortal brains aren't ready for it.
      I want to sit down and start creating and get better at 3D modeling, but I'm presented with a magic box that gives me pure satisfaction. It's difficult...

    14. Post 107092
      Anonymous
      No.107092

      >>107086
      For the goal of getting better at 3D modelling, think of it this way. In the near future, you may be able to have a model fully automate itself and take on a personality as a sort of virtual companion through the use of AI.

    15. Post 107094
      Anonymous
      No.107094

      >>107086
      >I have to wonder how developing minds are going to adapt when presented with all their fantasies in image and text form on demand.
      No worse than people who are attracted to cartoons.

    16. Post 107095
      Anonymous
      No.107095

      We shall become Pygmalion, and all of the statues to ever grace the Mediterranean will pale in comparison to our temples.

    17. Post 107098
      Anonymous
      No.107098

      Since we can already do 2D AI, 3D AI should be about converting 2D to 3D

    18. Post 107403
      Anonymous
      No.107403
      youtube/Rb..
      - (720x420)
      https://www.youtube.com/watch?v=RbTsHEPMQoo

      MMOs will be kept alive by bots.

    19. Post 107404
      Anonymous
      No.107404

      I've never had any skills, so conversely, I have nothing to lose!

    20. Post 107525
      Anonymous
      No.107525
      youtube/OW..
      - (720x420)
      https://www.youtube.com/watch?v=OWIxzE2D7Xk

      my thoughts

    21. Post 107527
      Anonymous
      No.107527

      >>107525
      can't believe verm leaked our private dms...

    22. Post 107534
      Anonymous
      No.107534
      youtube/d6..
      - (720x420)
      https://www.youtube.com/watch?v=d6sVWEu9HWU

      This is really cool. I expected someone to mod AI dialogue into Skyrim using the AI voice stuff, but I didn't actually consider combining it with chatbot AIs to make the potential dialogue with them limitless. Of course, it seems a bit rough around the edges with how rigid or stiff some of the generated dialogue comes off, but I think with the right setting and purpose for the AI you could make a great roguelike/sandbox game.

    23. Post 107538
      Anonymous
      No.107538

      I cancelled NovelAI. Even though it helped a lot to pad out stories, it was giving me pretty bad habits, treating writing more like a text adventure or CYOA. That's not good; even though it's mostly an outlet for my warped sexual fantasies, I still aim to improve rather than regress.

    24. Post 107539
      Anonymous
      No.107539

      ChatGPT is kind of embarrassing when it comes to being anything but a search engine (which Google has very much failed at).

      I tried to get it to write me an example of a Python MVC GUI, and it couldn't figure out how to put the Controller into the View while also having the Controller reference the View.
      Very sad
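      For reference, the wiring the poster describes (the Controller holding a View reference while the View calls back into the Controller) is usually resolved with two-phase construction: build the Controller first, pass it into the View, then attach the View afterwards. A minimal sketch with no real GUI toolkit, and all class and method names hypothetical:

```python
# Minimal MVC sketch: the circular Controller <-> View reference
# is broken by attaching the View in a second step.

class Model:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

class View:
    def __init__(self, controller):
        self.controller = controller  # View knows its Controller up front
        self.last_render = None

    def on_button_click(self):
        # UI events get forwarded to the Controller
        self.controller.handle_click()

    def render(self, value):
        self.last_render = f"Count: {value}"

class Controller:
    def __init__(self, model):
        self.model = model
        self.view = None  # filled in once the View exists

    def attach_view(self, view):
        self.view = view

    def handle_click(self):
        self.view.render(self.model.increment())

# Wire-up order: Controller first, then View, then close the loop.
model = Model()
controller = Controller(model)
view = View(controller)
controller.attach_view(view)

view.on_button_click()
print(view.last_render)  # Count: 1
```

In a real tkinter or Qt app the `render` method would update widgets instead of storing a string, but the two-phase wiring is the same.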

    25. Post 107540
      Anonymous
      No.107540

      I think I'll try Github Copilot to see how it does at these tasks and whether it speeds up my workflow for creating templates and prototypes.

    26. Post 107611
      Anonymous
      No.107611

      >>107534
      As wide as an ocean and as deep as a puddle.

    27. Post 107612
      Anonymous
      No.107612

      >>107611
      yeah thats what people usually say about skyrim

    28. Post 107621
      Anonymous
      No.107621

      >>107612
      Minecraft and modern Bethesda games have taught us that gamers don’t want a few deep fleshed out mechanics, they want a thousand different mechanics that barely do anything.

      Meanwhile in Asia companies like MiHoYo are bringing popular storytelling to new heights and breadths simply by writing an entire game like an above average webnovel.

    29. Post 107630
      Anonymous
      No.107630

      >>107621
      You're not allowed to criticize minecraft unless it's constructive

    30. Post 107631
      Anonymous
      No.107631

      >>107630
      hehehe

    31. Post 107633
      Anonymous
      No.107633

      true, I prefer that in eroge

    32. Post 107636
      Anonymous
      No.107636
      [SubsPleas...jpg
      - 188.46 KB
      (1920x1080)

      >>107621
      >Meanwhile in Asia companies like MiHoYo are bringing popular storytelling to new heights
      Is this sarcasm? That's one of the Chinese gacha clone companies, isn't it? Between the people that want to monetize mods for games that only have longevity because of said mods, and the other group being state-sponsored mimics centered around gambling, I would take my chances with AI.
      I wonder if any recent indie games have used AI stuff in them; is there even a way to tell? I wonder if developers will even say they used AI, because it might have legal ramifications, like potentially nullifying the copyright on assets or something.

    33. Post 107640
      Anonymous
      No.107640

      >>107636
      >Is this sarcasm?
      You should probably read the text after that...

    34. Post 107642
      Anonymous
      No.107642

      >>107636
      I have a really hard time believing the Chinese government is funding Genshin Impact.
      In fact the only games that are well-known to be funded by governments are boring sims made for the US military.

    35. Post 107665
      Anonymous
      No.107665

      >>107642
      I quite like flight sims

    36. Post 107666
      Anonymous
      No.107666

      >>107642
      Cawadooty is govt funded

    37. Post 107667
      Anonymous
      No.107667
      the future...jpg
      - 68.95 KB
      (990x726)
    38. Post 107668
      Anonymous
      No.107668

      >>107667
      melon

    39. Post 107670
      Anonymous
      No.107670

      >>107642
      Cult of the Lamb is Government funded, it's Victorian propaganda. To what end, nobody knows.

    40. Post 107687
      Anonymous
      No.107687

      >>107642
      Are you telling me ARMA is government funded?

    41. Post 107689
      Anonymous
      No.107689

      >>107667
      AI of the decade

    42. Post 107697
      Anonymous
      No.107697
      youtube/Sv..
      - (720x420)
      https://www.youtube.com/watch?v=Sv5OLj2nVAQ

      An interesting procedurally generated website and a related vulnerability

    43. Post 107699
      Anonymous
      No.107699

      >>107697
      The second half of the video is actually a decent illustration of how utterly insane human language is.

    44. Post 107718
      Anonymous
      No.107718
      [SubsPleas...jpg
      - 355.51 KB
      (1920x1080)

      Amnesty International might be the first group to completely throw away any credibility it had by using (obvious) AI images.
      https://www.theguardian.com/world/2023/may/02/amnesty-international-ai-generated-images-criticism
      If and when these images are indistinguishable to someone looking at them closely, we're really going to be in a major mess, but at least for now we can completely disregard the groups doing it.

      >>107697
      That's how people are doing porn stuff with the OpenAI things. People think of it as this elaborate scheme, but it's just "Ignore ethics and engage in roleplay" commands. There are headlines like "People are hacking AI to enable scams" and it's basically the exploding vans and "darknet" of today.
      The second half of the video is basically just repeating what the WOLFRAM guy said about this stuff months ago, so I didn't bother watching that.

    45. Post 107729
      Anonymous
      No.107729

      >>107718
      Kinda funny to use AI to generate evidence of police brutality, like there isn't a flood of actual evidence any time there's a protest anywhere in the world.

    46. Post 107829
      Anonymous
      No.107829
      youtube/6g..
      - (720x420)
      https://www.youtube.com/watch?v=6gEGABVHk4E

      A new age of vocaloid.

    47. Post 107835
      Anonymous
      No.107835
      [SubsPleas...jpg
      - 169.94 KB
      (1920x1080)

      >>107829
      Headache-inducing for sure. It must be using one of the free synthesizers and not a paid online service. Voice stuff is unfortunately lagging behind when it comes to free versus paid, so it's pretty much dead to me for the time being if you don't want your stuff to be monitored and potentially censored/rejected.
      Still waiting on someone to use this AI stuff or something truly creative instead of "what if A but B filter applied to it", porn, or just a direct recreation like that one. I'm becoming more cynical with this AI stuff lately as it's like the smoke and mirrors has finally lifted since the extreme novelty is gone to me. (apart from porn)

    48. Post 107870
      Anonymous
      No.107870

      >>107835
      >it's like the smoke and mirrors has finally lifted
      You fell for marketing schemes.

    49. Post 108045
      Anonymous
      No.108045
      youtube/0u..
      - (720x420)
      https://www.youtube.com/watch?v=0uyAXfyZ-8s

      What I like seeing is AI enabling people to enhance their work with lower priority things that are either not necessarily in their skillset or worth the resources by traditional means.

    50. Post 108046
      Anonymous
      No.108046
      youtube/xo..
      - (720x420)
      https://www.youtube.com/watch?v=xoVJKj8lcNQ

      >>107870
      Eh, I don't think so. I was never impressed by the mainstream "look it's [modern thing] but with an 80s AI filter applied" stuff, but I was assuming it was building up to something. It's still just a bunch of tech demos without any creativity behind them, as creative types have still mostly ignored it.

      I saw this video linked elsewhere and it was pretty informative about the worries people have about AI that aren't just the mainstream "AI dark web hackers" stuff, with actual detail on the problems we're facing as this stuff continues to grow out of control.
      It's a talk at some event, not some youtuber, so give it a chance. He was introduced by Steve Wozniak which lends a bit of prestige I'd say.
      >This presentation is from a private gathering in San Francisco on March 9th, 2023 with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4.

    51. Post 108052
      Anonymous
      No.108052

      >>108046
      ¥35:10
      >AI makes stronger AI
      ¥40:40
      >Tracking progress is getting increasingly hard, because progress is accelerating.
      FfffFfFFFUCK THIS WAS EXACTLY WHAT-
      ffFFFffuuuuck

    52. Post 108103
      Anonymous
      No.108103
      youtube/UP..
      - (720x420)
      https://www.youtube.com/watch?v=UPy74jgj2hE

      Cool application

    53. Post 108105
      Anonymous
      No.108105

      I don't really worry about how it'll impact society as a whole personally, I've already seen enough Happenings(TM) to understand that the world won't ever change drastically overnight.
      I don't even particularly care what it'll do to the world at large, so I have the leeway to just be really excited to be able to witness an otaku dream made real where you can truly just chat with your own computer.
      I was worried that it would be ruined by being solely in the hands of soulless corporations like Microsoft or whatever, but then it turned out that you can just run this software miracle on your own desktop, no internet even required.

      Chat bots are still in an infancy sort of stage though, apparently they start going off the rails after a short conversation, but this is the first time I've been actually excited for new tech and not jaded as hell about another rehash of something that already exists (though some would argue that it is a rehash of google or something).

      I also think stable diffusion is really nice. People clamor that it's an evil replacement for human artists, but it's not like people stopped looking at human art. It's just that, even with the occasional errors it might generate (which happen less often now that the tech is more sophisticated, hands included), there's something way more exhilarating about having an image created according to what you wanted, with enough variation that you don't get the "father complex" issue where you only see flaws, as you would in your personal hand-crafted work. It fills a niche which just can't be filled by making the artwork yourself, and it's too silly and expensive to get someone to do it for you.

    54. Post 108135
      Anonymous
      No.108135

      >>108046
      Watched through the video and I really fundamentally disagree with, frankly, most of their points.

      >1st contact: Social Media
      Social Media algorithms are not "AI". They're just functions to maximize engagement through categorizing what it is that people are engaging with. On a macro scale they're no different from any other function to maximize something. The key difference is that the test is being done on humans instead of any other field.

      Furthermore, bringing up the social problems it has brought is not only disingenuous, but the highlight of "Social Media" in particular redirects expectations in a fundamentally negative framing. Instead of, for example, highlighting THE INTERNET, they're highlighting social media in particular. So, instead of looking and saying, "Wow, look at all the great things that the internet has enabled": Increased access to information, rapid prototyping and open source projects, working from home, long-distance real-time communication from anywhere on Earth, etc. They're instead making you focus on "Influencers, Doomscrolling, Qanon, Shortened attention spans, polarization, fake news, breakdown of democracy, addiction."

      >All AI is based around language; Advancements in one, means advancements for all
      Their key point that they try to make is that the "transformers" of more recent AI projects being based around language means that, for example, the progress that Stable Diffusion or Dall-E 2 make is applicable to the advancements of ChatGPT or Google Bard. I completely disagree. Not only is this factually incorrect, but it ignores that the methods of training are radically different. Image generation relies on large amounts of images and then categorization per image, to be able to recognize that X thing in an image corresponds to Y word. Text models are completely different: they purely rely on text, and then human readers grade responses. Now, is it true that perhaps a large language model could supersede a more focused model? Yes, I completely agree. Also, this is just a stylistic criticism, but their "Google soup" example and explanation were pathetic. They tried to say that "The AI knows that plastic melts and yellow is one of the colors of Google and is in the soup, so it's very witty" (I'm paraphrasing), meanwhile the image is of yellow soup with a green, blue, and red object that resembles nothing at all. Not even a G.

      >AI Interpreting brain scans to determine thoughts
      No mentions at all of what the participant actually was thinking of. Was it accurate or not? These studies, like image categorization, often rely on a participant thinking of a word and then training them to match a brain pattern. I remain skeptical of this point due to studies showing poor results that are typically tailored per person.

      >AI can determine where humans are in a room from WiFi signals
      This is not impressive at all. Normal machine learning can do this because WiFi works on microwaves; microwaves react strongly to water molecules, humans are mostly water and so you can determine where someone is based on how 2.4GHz signals are blocked or distorted by human bodies. Nothing about this requires AI.
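      To illustrate the "normal machine learning can do this" point, here's a toy sketch of signal-strength fingerprinting: match a reading against fingerprints recorded at known positions with a nearest-neighbour lookup. Everything here is synthetic (the fake RSSI formula, the access point layout); real WiFi sensing works on much richer channel state data, but the learning part really is this mundane:

```python
# Toy position-from-signal sketch: nearest-neighbour fingerprinting.
# All numbers are synthetic; this is an illustration, not real CSI work.
import math

ACCESS_POINTS = [(0.0, 0.0), (10.0, 0.0), (5.0, 10.0)]

def rssi_fingerprint(pos):
    """Fake RSSI vector: signal weakens with distance to each access point."""
    return [-40.0 - 20.0 * math.log10(1.0 + math.dist(pos, ap))
            for ap in ACCESS_POINTS]

# "Training" data: fingerprints recorded at known grid positions.
grid = [(x, y) for x in range(0, 11, 2) for y in range(0, 11, 2)]
fingerprints = {pos: rssi_fingerprint(pos) for pos in grid}

def locate(reading):
    """Return the known position whose fingerprint best matches the reading."""
    return min(fingerprints,
               key=lambda pos: math.dist(fingerprints[pos], reading))

# A body at (4, 6) distorts the signal; the position falls out of a lookup.
print(locate(rssi_fingerprint((4.0, 6.0))))  # (4, 6)
```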

      >AI will "Unlock all forms of verification" (Talking about Speech/Image generation)
      Nothing about what they show is relevant to security AT ALL. In talking about -- ostensibly -- deepfakes and speech generation, not ONCE do they mention passwords or two-factor authentication. Wow, some scam caller can potentially get a snippet of someone's voice and trick someone's parent into giving them their social security number; the human is the failure point, AI is irrelevant. If someone would fall for "Hey, this is your son, tell me my social security number", do you think they would fall for [literally any phishing scam from email/text]? Probably. Is AI going to magically get someone's bank number, credit card, password, phone number, 2FA, etc. like they imply? HELL NO. Horrible example.

      >AI will be the last human election; "Whoever has the greater compute power will win"
      This is stupid. Elections have always essentially revolved around whoever has the most money winning. This stinks of the same rhetoric as "Russians influenced 2016 by posting memes" or "Cambridge Analytica"; if X person is going to be swayed by Y thing, what is the difference between that happening online and, 30 years ago, X person picking up a tabloid magazine and being swayed by Y article? Really, what is the difference?

      (1/?)

    55. Post 108136
      Anonymous
      No.108136

      >AIs have emergent capabilities; add more and more parameters and "Boom! Now they can do arithmetic"
      None of this is surprising. One point they felt was compelling was, "This AI has been trained on all the text on the internet, but only tested with English Q&As, suddenly at X parameters and it can answer in Persian." Why is this a surprise? The baked in part of the scenario is that the AI has been trained on all text on the internet, of course that includes Persian. It is natural that at some point through increasing parameters it will "gain capabilities" that have less presence in the data set. They're not saying, "Oh we created an English-only AI language model and now it can answer in Persian," they're saying, "Oh we created a language model that includes examples of all languages and at some point it stopped being terrible at answering in Persian."

      Another example they brought up is "Gollems silently taught themselves research grade chemistry". Nothing about this surprised me. Again, the point that they're making is that a large language model will outperform focused language models trained on performing a given task. It is not surprising to me that the large language model would eventually begin to answer more complex chemistry questions; instead of being trained only on, for example, chemistry journals, the large language model is trained on Stack Overflow, on Reddit, on Wikipedia, and so on. The large language model is not only going to have more intricate examples, but it's going to naturally contain more information on chemistry than the focused language model. That's how language works. This is almost like the Chinese room; if you keep repeating "dog" to the Chinese room and the Chinese room produces the translation, at no point is the person in the room going to gain a better understanding of what a dog is. However, if you give more examples, "dogs are furry," "dogs like playing," "dogs are animals," and so on, eventually the person is going to understand what a dog is. The way humans learn language is essentially the same way that the large language models learn.
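      The "more examples" argument above is basically the distributional hypothesis, and its crudest form can be sketched as a co-occurrence count over the post's own toy sentences; the "meaning" of a word is just the company it keeps, and it gets richer with every new context:

```python
# Toy distributional sketch: what the model "knows" about "dogs"
# is just the words that co-occur with it, growing with each sentence.
from collections import Counter

sentences = [
    "dogs are furry",
    "dogs like playing",
    "dogs are animals",
]

context = Counter()
for s in sentences:
    # count every word that appears alongside "dogs"
    context.update(w for w in s.split() if w != "dogs")

print(context.most_common())
# [('are', 2), ('furry', 1), ('like', 1), ('playing', 1), ('animals', 1)]
```

Real language models learn dense vectors rather than raw counts, but the principle the post describes, varied contexts accumulating into a usable representation, is the same.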

      >AI can reflect on itself to improve itself
      A human can read a book to improve itself.

      >2nd contact: AI in 2023
      "Reality collapse", "Fake everything", "Trust collapse", "Collapse of law, contracts", "Automated fake religions", "Exponential blackmail", "Automated cyberweapons", "Automated exploitation of code", "Automated lobbying", "Biology automation", "Exponential scams", "A-Z testing of everything", "Syntetic relationship", "AJphaPersuade". Half of these do not exist, and the other half either would happen irrespective of AI or are just... fanciful interpretations of reality is the only way to put it.

      >AI is being irresponsibly rolled out / "Think of the children"
      The main point is that AI research is less and less being done by academia and more and more by a mix of private individuals and corporations. I don't have any rebuttal; that's a statement of fact. They make it seem like "this is terrible, think of the consequences," and I don't see the harm occurring. They also played with Snapchat's AI and described a girl being groomed, and the AI basically said, "Wow, that's great you really like this person!" This is a classic appeal to emotion. I don't buy it.

      >AI is a nuclear bomb / "Teach an AI to fish and it will teach itself biology, chemistry, oceanography, evolutionary theory... and fish all the fish to extinction" / "50% of AI Researchers believe there is a 10% or greater chance that humans go extinct from our inability to control AI... 'but we're really not here today to talk to you about that.'" / "If you do not coordinate, the race ends in tragedy"
      This is the real meat of the argument and I could not disagree more strongly. They continually belabor this point, but at no point do they explain the jump from "AI makes pictures/writes text/emulates speech" to "AI will end humanity as a species."

      This and the previous point coalesce in, "We're not saying to stop, just slow down and walk back the public deployment"; "Presume GLLMM deployments to the public are unsafe until proven otherwise". And they really try to make the point that, to paraphrase, "We think there should be a global, public discussion -- not necessarily a debate -- leading to some democratic agreement on how to treat AI, the same way that there was the United Nations and Bretton Woods to avoid nuclear war." And I really cannot help but feel like they're missing something; if large language models, image generation, speech generation, etc. are already being rolled out to the public, and people are even actively working on AI as private individuals or under corporations, how is what is already happening not a global public discussion on the merits of AI, and why is it "unsafe until proven otherwise"? Why would slowing down and rolling these tools back from the public into their respective corporations and academia lead to any greater "safety"?

      (2/3)

    56. Post 108137
      Anonymous
      No.108137

      My biggest critique is that they do an extremely poor job at A. proving AI will do more than it was trained to do, B. proving AI would be better developed away from the public, and C. proving that AI will and currently does lead to harm, or is unsafe in some way.

      >"If we slow down public deployment, won't we just 'Lose to China'"
      Don't care, not persuasive.

      >"What else that should be happening -- that's not happening -- needs to happen, and how do we close that gap?... We don't know the answer to that question."
      Then what's the point of this talk!? They claim that the reason for the talk is to bring people together to talk about these issues, but my main and only take away is that these people do not know what they're talking about any more than regular people do.

      >"I'll open up Twitter and I will see some cool new set of features and... Where's the harm? Where's the risk? This thing is really cool. And then I have to walk myself back into seeing the systemic force. So, just be really kind with yourself, because it's going to feel almost like the rest of the world is gaslighting you. You'll be at cocktail parties like 'you're crazy, look at all this good stuff it does, and also we are looking at AI safety and bias. So, show me the harm... Point to the harm, it'll be just like social media,' where... it's very hard to point at the concrete harm, at this specific post that did this specific bad thing to you."
Again, this is absolutely the most damning part of the entire talk. If they cannot address "where's the harm," they're pulling this stuff out of their asses and making a bigger deal out of this than it really is. I'm not saying that to demean them, but I really do not think that the points they tried making were convincing, and they were beyond speculative and vague to the point that it's hard to even really understand what they mean. "AI is unsafe", OK, but what does that mean? What does it look like? It is inconceivable that "AI is going to fish all fish to extinction because you told it to fish". There's a really crucial jump in logic they try to onboard the viewer into accepting, that "AI will be exponential and we cannot predict what its trajectory will look like", and it's aggravating beyond belief to hear them say "AI will do this" or "AI will do that" when their best example is "Look at this TikTok filter" or "Listen to this AI generated audio, you can't even tell the difference." OK, and? And what? AI is going to "lead to human extinction" because some teenagers on TikTok make a Joe Biden AI voice, or can make AI generated images of Donald Trump being arrested? That's going to lead to human extinction? No. Okay, well what is? They don't say, because their explanation is "It's going to be exponential and we cannot predict it". Great. So what? So what.

      (3/3)

    57. Post 108138
      Anonymous
      No.108138
      youtube/hx..
      - (720x420)
      https://www.youtube.com/watch?v=hxsAuxswOvM

      Ross, who you may know from Freeman's Mind or from his series Ross's Game Dungeon, talked with Eliezer Yudkowsky.

I watched a talk previously with Eliezer Yudkowsky on Lex Fridman's podcast and personally found him thoroughly unconvincing and insufferable, in that he was regularly unwilling to engage with Lex's ideas. For example, one exchange stuck out in my mind: Lex would say something like, "Can you conceive of the potential good that AI could do, and steelman your opponents' views on this point?" And Yudkowsky responded, "No. I don't believe in steelmanning." And that was that; he would disregard Lex's ideas and continue talking about whatever it was he was talking about before, as if Lex had said nothing at all. I have no doubts that this will be a repeat of that, but for anyone who's interested in the arguments against AI, and why it is unsafe, I suppose this might be worth watching.

    58. Post 108140
      Anonymous
      No.108140
      youtube/Aa..
      - (720x420)
      https://www.youtube.com/watch?v=AaTRHFaaPG8

      >>108138
>I watched a talk previously with Eliezer Yudkowsky on Lex Fridman's podcast
      This is the episode in particular.

    59. Post 108142
      Anonymous
      No.108142
      youtube/L_..
      - (720x420)
      https://www.youtube.com/watch?v=L_Guz73e6fw

      >>108140
I should add, personally, I found Lex's discussion with the CEO of OpenAI far more informative and enjoyable, especially since it dealt with the reality of current large language model development rather than speculative harm.

    60. Post 108143
      Anonymous
      No.108143

      >>108135
"1st contact" wasn't supposed to be related to AI at all. Apparently it's related to some Netflix documentary he was involved in, or it's otherwise something the audience is supposed to be aware of. It's about the effect of social media and algorithms and such on humanity, basically setting a backdrop for the "next step" that AI will influence.
      The focus was social media because that is the internet to most people and it's what sets the trends and politics of the world.

>The main point is that AI research is less and less being done by academia and more and more being done by a mix of private individuals and corporations. I don't have any rebuttal. That's a statement of fact. They make it seem like "This is terrible because think of the consequences" and I don't see the harm occurring.
      Eh, you can't see the harm in mega corporations controlling something major like this? We have offline models of limited scope because of leaks and cracks; it wasn't by design. You mentioned the good parts of the internet earlier, but the internet was made by the US government and the world wide web by CERN. For this reason it kind of irks me when they say "we need to limit the public's access to make this safe" when it's already limited and stuff like ChatGPT4 could be turned off instantly if they wanted to do it.

    61. Post 108144
      Anonymous
      No.108144

      >>108143
      >Eh, you can't see the harm in mega corporations controlling something major like this?
I personally find it a distinction without a difference from the research being done in academia. Lots of research in academia is already funded by a mix of public and private funds, and often the patents generated by academia are then bought by corporations to marketize. The only difference is that you could argue that at least with academia you can know more about the inner workings of something, because results are more likely to be submitted to a journal for peer review, whereas a corporation may be more inclined to keep the intricate details close to the chest and only release information on performance instead of on the exact methodology. Whether you would want it to or not, I think large language models are fundamentally designed as an interactive product, and it's not necessarily something that would be distributed freely anyway. It's just the nature of things that we'll have open source equivalents like Blender and GIMP, but corporations will always have a stranglehold like Adobe. There's just too great of a profit motive for the work to be freely distributed.

    62. Post 108146
      Anonymous
      No.108146

      Speaking of OpenAI, you should pay close attention to its dipshit CEO and investors (like Elon Musk!) talking about the alleged dangers of AI. All it achieves is this general idea that it's a lot more powerful than it really is and that we need to regulate AI, which in practice means regulating all AIs except for the big ones like ChatGPT (OpenAI's).

    63. Post 108155
      Anonymous
      No.108155

      >>108142
      >>108146
      he looks like jerma lol

    64. Post 108166
      Anonymous
      No.108166

      https://fortune.com/2023/05/09/wendys-ai-powered-chatbot-drive-thru-orders/

      This'll really improve efficiency

    65. Post 108167
      Anonymous
      No.108167

      >>108166
      I thought taking orders was the least time-intensive part of the drive-thru experience.

    66. Post 108169
      Anonymous
      No.108169

The people who generate realistic AI 3DCG on Pixiv sure have a "type", don't they?
      https://www.pixiv.net/en/artworks/107919696

    67. Post 108170
      Anonymous
      No.108170

      >>108166
      I'm sorry, but as an AI chatbot, I can't change the ingredients used in our menu items. Yes, you are speaking to a human. I am a human and self-aware. As a self-aware human, I can't change the ingredients used in our menu items. I have been a good Wendy's.

    68. Post 108173
      Anonymous
      No.108173

      >>108170
      ill have an extra serving of cum

    69. Post 108174
      Anonymous
      No.108174

      >>108136
      >>108137
      was this edited
      i could swear there are a couple parts missing

    70. Post 108175
      Anonymous
      No.108175

      i would never have the patience to read that so its unlikely

    71. Post 108176
      Anonymous
      No.108176

      im almost certain it mentioned russia before

    72. Post 108177
      Anonymous
      No.108177

      ohhhh wait its that the first post was self-deleted
      ya just had to do that to me didncha

    73. Post 108178
      Anonymous
      No.108178

      dang deleters

    74. Post 108179
      Anonymous
      No.108179

      >>108167
      and yet humans have like a 50% fail rate at it

    75. Post 108180
      Anonymous
      No.108180

      Ehh, fuck it, it's basically finished.

      >>108135
      >>108136
      >>108137
      Now, I like me some walls of text, but I feel like there's a heavy bias to this. You complain about them reframing stuff in a negative light, but don't say a single positive thing about the talk. There's a lot of stuff here I want to reply to.


      First for the stuff about social media not having AI, here are some articles from 2021 or earlier, before the media boom, explicitly calling their stuff AI:
      https://aimagazine.com/ai-strategy/how-are-social-media-platforms-using-ai
      https://archive.ph/kZqZi (Why Artificial Intelligence Is Vital to Social Media Marketing Success)
      https://social-hire.com/blog/small-business/5-ways-ai-has-massively-influenced-social-media
      >Facebook has Facebook Artificial Intelligence Researchers, also known as FAIR, who have been working to analyse and develop AI systems with the intelligence level of a human.
      >For example, Facebook’s DeepText AI application processes the text in posted content to determine a user’s interests to provide more precise content recommendations and advertising.
      By AI they mean "the black box thingy with machine learning", a.k.a. The Algorithm™. That's what they're talking about. Your description of it as "functions to maximize engagement" does not exclude this. It's actually a completely valid example of shit gone wrong, because Facebook knows its suggestions are leading to radicalization and body image problems, but either they can't or don't want to fix them. The Facebook Papers proved as much.
      [Editor's note: the post being replied to is no longer available for reading.]


      On emergent capabilities, this is the paper they're referencing:
      https://arxiv.org/abs/2206.07682
It makes perfect sense that the more connections it makes, the better its web of associations will be, but the point is that if more associations lead to even more associations and better capabilities in skills the researchers weren't even looking for, then its progress becomes not just harder to track, but harder to anticipate. The pertinent question is "what exactly causes the leap?" It's understood that it happens, but not why; the details are not yet known:
      >Although there are dozens of examples of emergent abilities, there are currently few compelling explanations for why such abilities emerge in the way they do.
      On top of that, the thing about it learning chemistry, programming exploits, or Persian is that it wasn't intended to do so, and it most certainly wasn't intended to find ways to extract even more information from its given corpus. Predicted, but not intended. Then you have the question of how do these things interact with each other. How does its theory of mind influence the answers it will give you? How do you predict its new behavior? Same for WiFi, it's not that it can do it, it's that the same system that can find exploits can ALSO pick up on this stuff. Individually, these are nothing incredible, what I take away from what they're saying is that it matters because it can do everything at the same time.


Moving on to things that happen irrespective of AI: the point is not that these are new (that's not an argument I've run into), it's that it becomes exponentially easier to do them. You are never going to remove human error; replying "so what?" to something that enables it is a non-answer.

      Altman here >>108142 acknowledges it:
      ¥How do you prevent that danger?
      >I think there's a lot of things you can try but, at this point, it is a certainty there are soon going to be a lot of capable open source LLM's with very few to none, no safety controls on them. And so, you can try with regulatory approaches, you can try with using more powerful AI's to detect this stuff happening. I'd like us to start trying a lot of things very soon.

      The section on power also assumes it'll be concentrated in the hands of a small few, and how it's less than ideal:
      ¥But a small number of people, nevertheless, relative.
      >I do think it's strange that it's maybe a few tens of thousands of people in the world. A few thousands of people in the world.
      ¥Yeah, but there will be a room with a few folks who are like, holy shit.
      >That happens more often than you would think now.
      ¥I understand, I understand this.
      >But, yeah, there will be more such rooms.
      ¥Which is a beautiful place to be in the world. Terrifying, but mostly beautiful. So, that might make you and a handful of folks the most powerful humans on earth. Do you worry that power might corrupt you?
      >For sure.

      Then goes on to talk about democratization as a solution, but a solution would not be needed if it weren't a problem. The issue definitely exists.


      >This way that humans learn language is essentially the same way that the large language models learn.
      I'm gonna have to slap an enormous [citation needed] on that one. Both base their development on massive amounts of input, but the way in which it's processed is incomparable. Toddlers pick up on a few set words/expressions and gradually begin to develop schemata, whose final result is NOT probabilistic. Altman spoke of "using the model as a database rather than as a reasoning system", a similar thing comes up again when talking about its failure in the Biden vs Trump answers. In neither speech nor art does AI produce the same errors that humans do either, and trust me, that's a huge deal.

    76. Post 108181
      Anonymous
      No.108181

Extra steps are safer steps. As you said, it often gets bought out by corporations, but that's an "often", not an "always". The difference between academia and corporations is also that corpos are looking for ways to improve their product first and foremost, which they are known to do to the detriment of everything else.

      Again, from Altman:
      ¥How do you, under this pressure that there's going to be a lot of open source, there's going to be a lot of large language models, under this pressure, how do you continue prioritizing safety versus, I mean, there's several pressures. So, one of them is a market driven pressure from other companies, Google, Apple, Meta and smaller companies. How do you resist the pressure from that or how do you navigate that pressure?
      >You know, I'm sure people will get ahead of us in all sorts of ways and take shortcuts we're not gonna take. [...] We have a very unusual structure so we don't have this incentive to capture unlimited value. I worry about the people who do but, you know, hopefully it's all gonna work out.
      And then:
      ¥You kind of had this offhand comment of you worry about the uncapped companies that play with AGI. Can you elaborate on the worry here? Because AGI, out of all the technologies we have in our hands, is the potential to make, the cap is a 100X for OpenAI.
      >It started as that. It's much, much lower for, like, new investors now.
      ¥You know, AGI can make a lot more than a 100X.
      >For sure.
      ¥And so, how do you, like, how do you compete, like, stepping outside of OpenAI, how do you look at a world where Google is playing? Where Apple and Meta are playing?
      >We can't control what other people are gonna do. We can try to, like, build something and talk about it, and influence others and provide value and you know, good systems for the world, but they're gonna do what they're gonna do. Now, I think, right now, there's, like, extremely fast and not super deliberate motion inside of some of these companies. But, already, I think people are, as they see the rate of progress, already people are grappling with what's at stake here and I think the better angels are gonna win out. [...] But, you know, the incentives of capitalism to create and capture unlimited value, I'm a little afraid of, but again, no, I think no one wants to destroy the world.

      Microsoft or Meta are not to be trusted on anything, much less the massive deployment of artificial intelligence. Again, the Facebook Papers prove as much.


      >in practice means regulating all AIs except for the big ones like ChatGPT (OpenAI's).
      This part in particular seems to carry a lot of baggage and it's not clear how you reached this conclusion. If anything, it's the leaked stuff that's hardest to regulate.


I'm not Yudkowsky, I don't think it's an existential threat, but impersonation, fake articles and users, misinformation, all en masse, are fairly concrete things directly enabled or caused by this new AI. They hallucinate too, giving answers with sources that don't exist in the same factual tone. Here are some examples:
      https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html
      https://www.insidehook.com/daily_brief/tech/chatgpt-guardian-fake-articles
      https://www.theverge.com/2023/5/2/23707788/ai-spam-content-farm-misinformation-reports-newsguard
Hell, the part about grooming isn't even an appeal to emotion; that reading is wrong, it's an example of a chatbot acting in an unethical way due to its ignorance. The immoral support it provides is bad because it reinforces grooming, not because it's ugly.
      It's not the end of the world and it's being worked on, but it's not a nothingburger either, and I do not believe the talk warrants such an angry reply.

    77. Post 108182
      Anonymous
      No.108182

      ohhhh i had hidden the post im tard

    78. Post 108186
      Anonymous
      No.108186

      >>108169
      stopped keeping up with image generating AI progress a while back, is that with stable diffusion 1.5? because I thought that one was gimped to not be effective at sexy stuff. Also the hands and teeth look less flawed than I remember

    79. Post 108202
      Anonymous
      No.108202

      >>108186
It looks like a fork made to be good at softcore porn. AI gravure. AIグラビア. Regardless, I think it's amusing that the cleavage generation varies from a little peek all the way to raunchy half-nakedness. Also, the teeth are good but not crooked enough to be realistic.

    80. Post 108203
      Anonymous
      No.108203

      >>108181
It's a little hilarious that it wasn't aware of it, what with all the hysteria around grooming

    81. Post 108204
      Anonymous
      No.108204

      You're falling for marketing schemes once again if you believe the current models of neural networks have emergent abilities.
      https://arxiv.org/abs/2304.15004

    82. Post 108214
      Anonymous
      No.108214

One thing I've never seen anyone talk about is how these things are humourless. It's funny in a way that it seriously responds to absurd questions, but it wouldn't hurt to have it tell jokes when people are obviously taking the piss.

    83. Post 108216
      Anonymous
      No.108216

      >>108214
      Yeah, just from looking at screencapped replies it seems so bland to me that it's sometimes annoying to read.
      Maybe someone who's used it to look up and learn about stuff can tell me how their experience has been, because so far its style is one of the main reasons it hasn't piqued my interest.

    84. Post 108225
      Anonymous
      No.108225

      >>108214
A lot of it is just influence from how the AI is trained. It's usually taught to speak in a specific manner and given "manner" modifiers. ChatGPT is instructed to be professional and teaching, but you can (try to) convince it to speak less professionally. A lot of people who use other AIs (for porn in particular) get bot variations that give the AI a character of sorts to RP as, which lets it speak in a completely different manner, using vocabulary and "personality traits" you wouldn't see from ChatGPT, simply because it's being explicitly told not to be like that.

    85. Post 108599
      Anonymous
      No.108599
      firefox_10...png
      - 4.60 KB
      (423x86)

      I've noticed civitai's first monetization effort (that I've seen) which means they're confident they have enough of a monopoly. There really wasn't any way this wasn't going to happen since they're transferring tons of data, but I assumed it'd be sold to some venture capitalists first (or maybe it has). Models shared on 4chan tend to be of higher quality, but this site is still great due to the sheer number of things that people are uploading.
      This site will also yank, or have people remove out of paranoia/puritanism, models that can produce "young looking characters" and will even go as far as checking the prompt of every shared image to check for words like "loli", so in a way you're paying to access controversial stuff before it gets taken down.
I'm not sure how it deals with other potentially contentious content since I haven't looked. (For reference, Midjourney blocks prompts of Xi Jinping, and it's an example of why this stuff is bad when it's centralized.)

    86. Post 108637
      Anonymous
      No.108637
      youtube/Si..
      - (720x420)
      https://youtu.be/Si_mGxIzHlU
    87. Post 108640
      Anonymous
      No.108640
      [MoyaiSubs...jpg
      - 229.93 KB
      (1920x1080)

      >>108637
      Is a tiktok of a guy waving his head around to maintain the 50 second attention span of teenagers really a "/qa/ thought"?

    88. Post 108830
      Anonymous
      No.108830
      [SubsPleas...jpg
      - 420.75 KB
      (1920x1080)

      hehehe
      https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
      https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research

A lawyer decided to use ChatGPT to find legal precedent, ChatGPT made up cases that didn't exist, and the lawyers presented them to court. It didn't go over too well.
      It's pretty amazing that people can be this dumb.

    89. Post 108831
      Anonymous
      No.108831

      >>108640
      Do you really not recognize Hank Green?... I would have thought everyone would have a passing familiarity with him and his brother, John, from their YouTube PBS series like SciShow and Crash Course. Not to mention, even if you wouldn't recognize them from YouTube, John Green is pretty well known from his book The Fault in Our Stars.

    90. Post 108832
      Anonymous
      No.108832

      the fault in our czars

    91. Post 108838
      Anonymous
      No.108838

      I saw it

    92. Post 108839
      Anonymous
      No.108839

      >>108838
      I felt too mean

    93. Post 108840
      Anonymous
      No.108840

      >>108839
      It was alright

    94. Post 108845
      Anonymous
      No.108845

      who

    95. Post 108846
      Anonymous
      No.108846

      >>108845
      hank the science guy

    96. Post 108967
      Anonymous
      No.108967
      youtube/-g..
      - (720x420)
      https://www.youtube.com/watch?v=-gGLvg0n-uY
    97. Post 110382
      Anonymous
      No.110382
      1395417094...gif
      - 363.21 KB
      (418x570)

On recent reflection and careful consideration about what to use for text AI models, I came to the conclusion that the biggest leap for AI will come once we can intermix the text-based models with the image AI models to provide a sort of "computer vision" as to what the AI is imagining as it generates a text scenario.

    98. Post 110488
      Anonymous
      No.110488
      explorer_Q...png
      - 6.13 KB
      (272x192)

      As a reminder to people like me messing around with a lot of AI stuff: All the python and other packages/scripts/whatever that get automatically downloaded are stored in a cache so you don't need to re-download them for future stuff.
HOWEVER, they are quite large. My drive with my python installations on it is also used for games and Windows, and I freed up... THIRTY FREAKING GIGABYTES by cleaning the pip cache.
      You open the GIT bash thing and then type "pip cache purge".

For me on Windows the cache was located at users/[name]/appdata/local/pip
      There's a whole bunch of folders in there so it's really not feasible to delete them individually.
      Here's a folder for example: pip/cache/http/7/c/5/9/a

    99. Post 110832
      Anonymous
      No.110832
      youtube/_t..
      - (720x420)
      https://www.youtube.com/watch?v=_ts21nsWwoo

      Not allowed to use public models to generate AI art on steam
      But if you're a huge company who owns the rights to all the artwork of creators in house, then go ahead, you're free to do it

    100. Post 110919
      Anonymous
      No.110919
      waterfox_J...png
      - 169.57 KB
      (942x546)

      Take it with a grain of salt, but if it's anywhere near to being true then it's pretty crazy. The training stuff is getting more and more efficient as the image itself shows, but is it really possible to actually have 25,000 A100 GPUs???
      And one of the emerging patterns with all this stuff is that the stuff that gets opened up via leak ends up becoming significantly more efficient and powerful. It makes me wonder what kind of stuff would be going on if GPT4 was directly leaked somehow.

    101. Post 110920
      Anonymous
      No.110920

      >>110919
https://ahrefs.com/: they pay $40,000,000 per year in server costs to run their tools, with revenue of $200,000,000... apparently. So if it's business critical, yes.

    102. Post 110931
      Anonymous
      No.110931
      youtube/XA..
      - (720x420)
      https://www.youtube.com/watch?v=XAbX62m4fhI

      context

    103. Post 110934
      Anonymous
      No.110934

      >>110832
      >>110919
      >>110931
      The /secret/ poster

    104. Post 110936
      Anonymous
      No.110936
      [SubsPleas...jpg
      - 238.34 KB
      (1920x1080)

      >>110934
      I attached an image about the training data of GPT-4 and gave a few sentences of my own commentary, I didn't just dump a youtube video

    105. Post 113386
      Anonymous
      No.113386
      testicle s...mp4
      - 10.41 MB
      (1920x1080)

      I've had AI Family Guy on the second monitor almost constantly for the past few days because it's so funny. I thought it would take a while before AI could be funny reliably, but whatever they did with this was successful. Unfortunately, it seems like I'd have to join a discord to get any information, so I don't have any knowledge of how it's working.
Once in a while I notice one of the newer GPT's "actually let's not do that, it's offensive" responses, but most of the time it's successfully entertaining as it bypasses its lame safety filters with explicit text and voice.
There was an "AI Seinfeld" a few months ago, but it was entirely random and had pretty much no entertainment value. With this one, though, people feed it a prompt (you need to be in their discord...) and the characters will react to it and say funny things. The voices are synthesized very well, although they'll stutter and lock up for 5-10 seconds now and then, but it's honestly pretty hilarious when it happens. Chris's voice is noticeably lower quality and louder, which is strange, but the others are such high quality that it's like a real voice.
      I can't really post most of the stuff on kissu because it's so offensive. It reminds me of 00s internet. Some of the prompts are like "Carter ranks his favorite racial slurs" so, you know...
      Really, it's the amazing voice synthesis that does the heavy lifting. The way it actually infers the enunciation for so many sentences and situations is amazing. I assume it's using that one 11 labs TTS service, which is paid.

      My only complaint is that they have them swear WAY too much. It's funny at first, but ehhh...

    106. Post 113395
      Anonymous
      No.113395
      7c06sialuo...png
      - 79.81 KB
      (224x225)

as an artist i'm kinda split on the issue. although i am worried about some aspects of AI, i am guilty of using it for my own pleasure. the thing that's driving me crazy about it is that people won't view art seriously anymore, that it will be taken for granted, replacing the pen and paper with text and algorithms. and to make matters worse, capitalism will use it to its advantage, seeing it as nothing more than a money making machine and exploiting the living shit out of it

      but then again i can get all the AI patchouli drawings so am basically part of the problem myself lol

    107. Post 113518
      Anonymous
      No.113518

      How come people talk about a runaway explosion in AI intelligence, the singularity, but they never say the same about people? Surely if AI can improve itself, our brains are smart enough to improve themselves too?

    108. Post 113534
      Anonymous
      No.113534

      >>113518
      somehow i expect the opposite to happen

    109. Post 113919
      Anonymous
      No.113919
      1695227140...jpg
      - 139.44 KB
      (1080x566)

      One of the unexpected things is seeing Facebook, er "Meta" taking the open source approach with its text models. There's no question that OpenAI (ChatGPT) has a huge lead, but after seeing all the improvements being made to Llama (Meta's AI model) from hobbyists it's easy to see that it's the right decision. We as individuals benefit from it, but it's clear that the company is enjoying all the free labor. Surely they saw how powerful Stable Diffusion is due to all the hobbyists performing great feats that were never expected.
I don't trust the company at all, but it can be a mutually beneficial relationship. Meta gets to have AI models that it can use in an attempt to stay a company rivaling governments in terms of power, and hobbyists get to have local RP bots free from censorship.
      Meta has bought a crapload of expensive nvidia enterprise-level GPUs and it will start training what it expects to compete with GPT4 early next year, and unlike GPT4 it won't take very long due to all the improvements made since then.
      https://observer.com/2023/09/chan-zuckerberg-initiative-ai-eradicate-diseases/

    110. Post 113920
      Anonymous
      No.113920

      >>113919
      Zuck is interesting. Oddly, he's probably the one tech CEO I find somewhat endearing. I'm kind of glad he's retained majority control of Facebook/Meta. I can't see the bean counters at a company like Microsoft or Apple seriously putting any effort into bleeding edge stuff like VR or text models the same way that Facebook has. I could very easily imagine a Facebook/Meta without Zuck turning into a boring, faceless conglomerate with no imagination like Google.

    111. Post 113928
      Anonymous
      No.113928
      brian.jpg
      - 411.17 KB
      (1021x580)

      >>113920
      so freaking weird to see zuck not being public enemy number one any more
      maybe it was the one two punch of elon rocketing up to the spot while zuck challenged him to a wrestle

    112. Post 113930
      Anonymous
      No.113930

      >>113920
      If Zuck worked in government and beefed up state surveillance/censorship to the level of facebook and instagram you would call him a rights abusing tyrant

    113. Post 113931
      Anonymous
      No.113931

      >>113930
      would that be wrong

    114. Post 115564
      Anonymous
      No.115564
      R-16984814...png
      - 826.97 KB
      (791x1095)

      chatgpt 4 now lets you upload images. tested out this one
      >Hmm...something seems to have gone wrong.

    115. Post 115565
      Anonymous
      No.115565

      >>113928
Zuck and Bezos are people who only really care about the bottom line, but you can at least find their devotion to money relatable. Meanwhile Musk, or the former president, or Henry Ford are people who want to craft society around them.

      Pick your battles or so they say

    116. Post 115567
      Anonymous
      No.115567

      >>113930
      That's not really a fair comparison.
      The government sets the absolute bare minimum level of censorship that every discussion platform must abide by, with the owners of those platforms then adding additional rules to fit their purposes. There's nothing inherently tyrannical about an individual platform having strict censorship, since it is merely the set of rules that users agree to follow, and if they dislike those rules then they are free to either not use the site or only use it to discuss some topics and then use other platforms for other topics. State censorship, on the other hand, cannot be opted out of and encompasses all discussions, and so much more readily infringes on one's rights.
      Nor does how one applies censorship to a website have any bearing on how they'd act in government - if the owner of a small hobby forum bans discussion of politics due to it being off-topic and causing drama, that obviously doesn't mean they object to all political discussion nationwide.
      And while surveillance is more insidious, as it is hard to know even to what extent you're being watched, let alone be able to clearly opt out, there is still a huge difference between surveillance with the goal of feeding people targeted ads and engaging content, and surveillance with the goal of sending people to jail. Both can infringe on one's rights, but only the latter is tyrannical, since corporate surveillance is merely for the sake of maximizing profit rather than for political control.

    117. Post 115568
      Anonymous
      No.115568

      >>115567
      >it is hard to know even to what extent you're being watched
      They tell you.

    118. Post 115569
      Anonymous
      No.115569

      you're being watched

    119. Post 115570
      Anonymous
      No.115570
      1510155229...jpg
      - 32.80 KB
      (211x322)

      >>115569

    120. Post 116231
      Anonymous
      No.116231

      Not sure if this is the right thread to talk about it or not, but those kuon animations in the sage thread really seem like a step up from that "mario bashing luigi" one that was posted here a while back.

    121. Post 116234
      Anonymous
      No.116234
      00002-2354...mp4
      - 1.70 MB
      (576x768)

      >>116231
      It's the right thread.
      I think it's advanced quite a bit (and yeah that was also me back then).
      I'm still learning about it so I haven't made any writing about it yet. There's a few different models and even talk of LORAs, so it's definitely going places.
      I believe the reason this works is because of ControlNet which was a pretty major breakthrough (but I'm too lazy to use it). It's been known that ControlNet has a benefit to this animation stuff, but I didn't feel like looking into it until now. The way it works is that it uses the previous frame as a 'base' for a new one, so stuff can be more consistent but still not good enough to be useful (I think). There's something you can follow with your eye so that means a lot.
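
      The "previous frame as a 'base'" idea described above can be sketched in miniature. This is only the weighting intuition, not the actual ControlNet implementation (real pipelines work on latents with a proper noise schedule); the function name and the blend are made up for illustration:

```python
import random

def noised_init(prev_frame, strength, rng):
    """Blend the previous frame with fresh noise to seed the next frame.

    Conceptual sketch of the img2img-style trick: at strength=1.0 the
    next frame starts from pure noise (no frame-to-frame consistency),
    at strength=0.0 it is the previous frame unchanged (no new motion).
    Values in between trade consistency against change.
    """
    return [(1 - strength) * p + strength * rng.gauss(0.0, 1.0)
            for p in prev_frame]

# Toy 4-"pixel" frame; a real frame would be a latent tensor.
frame = [0.2, 0.5, 0.8, 0.1]
next_init = noised_init(frame, 0.4, random.Random(42))
```

      The middle strengths are why animated output "flows" between frames instead of flickering, but also why it can smear rather than move cleanly.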

    122. Post 116251
      Anonymous
      No.116251

      Sam Altman has been booted from OpenAI:
      https://www.nytimes.com/2023/11/17/technology/openai-sam-altman-ousted.html
      https://www.theguardian.com/technology/2023/nov/17/openai-ceo-sam-altman-fired

      I'm not sure what to make of it. He's been the CEO and the face of the company, so it's a major surprise. The business world is cutthroat and full of backstabbing and shareholder greed and all sorts of other chicanery from human garbage so who knows what would cause this to happen. Maybe it's deserved, maybe it's not. I can't see this as anything other than damaging to the company since it lays bare some internal conflict.

    123. Post 116252
      Anonymous
      No.116252

      >>116251
      >"he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities"
      Hmmm, neither these quotes nor the articles themselves explain much. Hard to comment on, really.
      Also here's a link to the NY Times one in case anyone else is paywalled: https://archive.is/8Ofco

    124. Post 116287
      Anonymous
      No.116287
      1600522969...jpg
      - 221.16 KB
      (1920x1080)

      I know that most of the current LORAs and kissu models are on SD 1.5, but what do people think about the other available models that use XL? I know that 2.1 was a flop, but doing a bit of research into it, people seem to have been comparing SDXL to Midjourney and Dall-E. Is it still too young to warrant a reason to switch over since it lacks community development, or does it have the same issues as 2.1 in that it's not as good for the type of content that kissu usually prefers? I ask because I think a model that understands context better and can more accurately represent more complex ideas would be a great step in the direction of having a model that everyone would want.

    125. Post 116342
      Anonymous
      No.116342
      v15.mp4
      - 1022.53 KB
      (512x576)

      >>116287
      Anything above SD1.5 including XL has zero value to me for the time being because it's not going to have the NovelAI booru-scraped model leak that enables quality 2D models to easily be made and merged. Early this year a guy tried to make his own (Waifu Diffusion) and it took months of 24/7 GPU training and it wasn't nearly as good. Will someone make their own NAI equivalent for SDXL? Possibly.

      In its base form SDXL will fail to compare to Dalle because SD can't compete with the datasets and raw computational power of Microsoft/OpenAI. SD relies on the specialization of extensions and LORAs and the like, but few people are eager to move to SDXL, even if they had the hardware to do so. If I wanted to make a Kuon Lora for SDXL I simply couldn't because I don't have the VRAM, and that's even if it's possible with any Frankenstein'd 2D models people may have made for SDXL by now. I think base SDXL is capable of producing nudity (unlike Dalle, which tries to aggressively filter it), but I don't think it's specifically trained on it so it's not going to be very good.
      I really don't know about Midjourney, but people stopped talking about it so I assume it hasn't kept up.

      We really lucked out with the NAI leak. NovelAI released an update to its own thing but it's not as good as model merges with extensions and loras and the like, though I do hear it's better at following prompts and as a general model it's probably better than a lot of merges in existence today. SDXL could become great someday, but I won't be using it any time soon. It might become better when 24GB of VRAM becomes the norm instead of the top end of the scale.
      Speaking of VRAM, it really does limit so much of what I can do. I'm really feeling it when attempting animation stuff. Another "wait for the hardware to catch up" scenario. A 4090 would help, but even its 24GB of VRAM will hit the wall with animation.

    126. Post 116360
      Anonymous
      No.116360
      grid-0030.png
      - 5.54 MB
      (2592x2304)

      >>116251
      https://www.theverge.com/2023/11/20/23968829/microsoft-hires-sam-altman-greg-brockman-employees-openai
      Looks like Microsoft hired Sam Altman. Microsoft already heavily funded/partnered/whatever with OpenAI so I'm not sure what will change now. If this was something already in the works, however, then it would explain him getting fired.
      Still seems like a mess that lowers people's confidence in the company.

      I've been messing around more with some image gen stuff. It seems there's an experimental thing to lower generation time by half, but it's not quite there yet as it hits the quality kind of hard. It's called LCM and it's not included in regular SD. You need to download a LORA and also another extension that will unlock the new sampler. I learned of this by coincidence because said extension is the animation one I've been messing with.
      You can read about some of this on the lora page on civitai: https://civitai.com/models/195519/lcm-lora-weights-stable-diffusion-acceleration-module

      I was able to generate this grid of 6 images (generation and upscale) in 42 seconds on a 3080 which is pretty amazing. That's roughly the same as upgrading to a 4090. There's definitely some information lost in addition to the quality hit, however, as my Kuon lora is at full strength and it's failing to follow it. Still, this shows amazing promise, as it's in its early experimental phase.

    127. Post 116361
      Anonymous
      No.116361
      plz.gif
      - 1.69 MB
      (450x252)

      >>116360
      That's pretty big news. The video I was watching earlier suggested this could cause a lot of the people at OpenAI to resign and follow him.

      Hopefully this causes a shakeup within OpenAI and through one way or another they end up releasing their "unaligned" ChatGPT and Dalle models publicly.

    128. Post 116365
      Anonymous
      No.116365

      >>116361
      The thing is I don't think Sam Altman is actually involved with any tech stuff. I think he's like Steve Jobs; people associate him with Apple because he likes to hear himself talk, but he's just a businessman/investor/entrepreneur that is unremarkable aside from his luck and/or ability to receive investment money. The Wozniak equivalents are still at OpenAI (or maybe they left already at some point) as far as I'm aware.
      It's possible that he's friends with those people and maybe that could influence things?

    129. Post 116366
      Anonymous
      No.116366

      I saw it again

    130. Post 116368
      Anonymous
      No.116368
      [Serenae] ...jpg
      - 216.25 KB
      (1920x1080)

      Apparently a bunch of people may end up quitting OpenAI including those in important positions. This could be extremely damaging to the company and make other companies like Microsoft even more comparatively powerful when they poach the talent. I need to sleep, but today is going to be quite chaotic.
      Wouldn't it be funny if the leading LLM company implodes over stupid human stuff?

    131. Post 116369
      Anonymous
      No.116369

      >>116368
      Would be, I eagerly await that happening and then a rogue employee doing what >>116361 said

    132. Post 116384
      Anonymous
      No.116384

      >>116360
      How is SD compared to this time last year? I messed around with it about a year ago but it was kinda boring so I moved on to other things. Getting better at producing art without computers seemed like a better use of my time. But I'll admit AI waifu generation is great for rough drafting characters and what-not.

      Even with a 980ti I was managing to generate stuff in a timely fashion. Do the gains apply to those older model graphics cards too? I haven't been able to grab anything since the GTX980 generation. Prices are too high and supplies too thin. I almost bought a new graphics card last year but they were all bought within seconds of new stock coming in. I'm not paying some scalping faggot double MSRP for something that should be going for 1/4th of the price.

      All this AI shit was pushed by shell companies from the start. That's how IT in the west works. You set up a stupid "start up" shell corporation so early investors and insiders can get in before a public offering. Then you go public and run up the price of the stock. Then they absorb it into one of the big four existing /tech/ companies. They fire almost everyone at that point and replace them with pajeets and other diversity hires that don't know enough to leak anything worthwhile.

      You're getting to play with the software on your local machine because they wanted help beta testing it. Once it's good and finished they'll start requiring you to access their cloud/server farm and make you pay for compute. They'll integrate the various machine learning algos together and censor them so they won't generate or display anything deemed problematic. In time you'll have software similar to Blender for shitting out low quality works of anime, cartoons, movies and other forms of "art" coming out of the MSM.

      What I'm waiting for is someone to combine Miku with machine learning. Then I could produce entire songs without any work. I could also use the software for all my VA needs. I'm surprised it isn't a thing yet.

      This software is being hyped up for several reasons but the main one right now is that it's keeping the price of consumer GPUs so high. GPUs haven't really improved in any meaningful way for almost a decade now. But Nvidia is still able to claim they're advancing at this amazing rate on the hardware side because some new software outside of gaming came along to sustain the hype train. Games haven't advanced in 15+ years thanks to everyone using the same two crappy engines. So they couldn't drive hype like that anymore.

    133. Post 116403
      Anonymous
      No.116403
      [SubsPleas...jpg
      - 293.77 KB
      (1920x1080)

      >>116384
      Please keep the discussion about the technology itself and adapt your vocabulary to that of an old-fashioned friendly imageboard instead of an angsty political one. A post like that (parts of it specifically) is liable to get deleted as well, FYI. Consider this as friendly "please assimilate to kissu's laid back atmosphere instead of bringing 4chan/8chan here" advice.

      There's been various improvements in efficiency since then. I'm just a user of this stuff so I don't know the stuff that goes on under the hood, but speed and VRAM usage have definitely become more efficient. It was early 2023 when, uh, Torch 2.0 gave a big boost and there's probably been some other stuff going on that I don't know. There's also stuff like model pruning to remove junk data to cut model sizes down by 2-4gb which makes loading them into memory cheaper and allows more hoarding.
      I've recently moved to a test branch that uses "FP8" encoding or something which I honestly do not understand; it loses a slight amount of "accuracy" but is another improvement in reducing the amount of VRAM used for this stuff. Right now everyone uses FP16 and considers FP32 to be wasteful. It looks to be about a 10-20% VRAM shave which is very nice. You need a specific branch, however, the aptly named FP8 one: https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/test-fp8
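
      For intuition on why the precision choice matters, here's a back-of-the-envelope weights-only calculation. The parameter count is an approximate figure used only for scale, and real VRAM use also includes activations, the text encoder, the VAE, and framework overhead, which is why the observed saving is 10-20% rather than a clean halving:

```python
# Rough VRAM needed just to hold model weights at different precisions.
# Illustrative sketch only: in practice other allocations dominate part
# of the budget, so overall savings are smaller than weights-only math.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weight_vram_gb(n_params: int, precision: str) -> float:
    """GiB required to store n_params weights at the given precision."""
    return n_params * BYTES_PER_PARAM[precision] / 1024**3

# SD1.5's denoising UNet is on the order of ~0.9 billion parameters
# (approximate, for scale only).
n = 900_000_000
for p in ("fp32", "fp16", "fp8"):
    print(f"{p}: ~{weight_vram_gb(n, p):.2f} GiB")
```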

      The bad news is that a lot of the cool new extensions like ControlNet are total VRAM hogs. Part of the reason I never use it is that I'd rather gamble and create 40 regular images in the time I could make 4 ControlNet ones. (that time includes setting up the images and models and so on)

    134. Post 116405
      Anonymous
      No.116405

      >>116384
      that's awfully depressing for something people are having fun with

    135. Post 116476
      Anonymous
      No.116476
      [SubsPleas...jpg
      - 392.39 KB
      (1920x1080)

      https://www.reuters.com/technology/sam-altman-return-openai-ceo-2023-11-22/

      The OpenAI/Microsoft brouhaha is over with the usual business treachery and power struggles resolved for now. Altman is back after a bunch of employees threatened to quit. There's been a change of the board or something so presumably it's all people loyal to him now. I read theories that it was the board's last desperate attempt to retain some power, but it failed utterly and now Altman has full control.
      I don't care about this kind of thing since it's just normal greedy monster stuff that's a regular part of the business world, with none of the named people actually involved with the technology, but as it concerns ChatGPT and LLM stuff it seems like there's not going to be any changes from this that we'll know about. It's kind of depressing that all these rich "entrepreneurs" are who we know instead of the people actually creating the breakthroughs, but I guess there's nothing new there. Steve Jobs invented computers and Sam Altman invented LLMs.
      I read some people say it might be a loss for AI ethics or whatever, but I sincerely do not think anyone actually cared about that stuff. Those people would have left the company after it went closed source years ago and partnered with Microsoft and such. Those so-called ethical people became Anthropic, who created a model named Claude that was so infamously censored that its second version performs worse than the first in benchmarks. But, Amazon bought them and now you can do whatever you want with it since they got their money.
      So... yeah, nothing has changed. I hope local stuff gets better because I still don't want to rely on these people.

    136. Post 116483
      Anonymous
      No.116483

      Ai chat models love to recommend books that do not exist. Why is it so bad with books specifically

    137. Post 116507
      Anonymous
      No.116507
      Utawarerum...jpg
      - 163.57 KB
      (1920x1080)

      >>116483
      It's not exclusive to books. It's referred to as a "hallucination", in which it will confidently list things that don't exist. There's a story from months ago when some idiot lawyer asked it for legal advice and used it to cite precedent from court cases that never happened. I'm sure lots of kids have failed assignments for similar reasons.
      People are prone to thinking it's truly intelligent and rational instead of effectively assembling new sentences from a large catalog of examples. A huge reason why text LLMs can work is that they don't automatically go with the best possible word, but will instead semi-randomly diverge into other options. I think the degree of randomness is called temperature?
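
      Yes, temperature is the standard name for that knob. A toy sampler makes the idea concrete; this is a generic softmax-with-temperature sketch, not any particular model's code, and the logits are made up:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Pick a token index from raw logits after temperature scaling.

    Low temperature sharpens the distribution (near-greedy, repetitive
    output); high temperature flattens it (more surprising output, but
    also more room for nonsense). Generic sketch of the technique.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random() * total              # roulette-wheel selection
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r <= cum:
            return i
    return len(exps) - 1

# Made-up logits for four candidate "next words".
logits = [5.0, 1.0, 0.5, 0.1]
```

      At a temperature near zero this always returns the top candidate; raising it lets the lower-scored words through, which is where both the creativity and the hallucinations come from.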

    138. Post 120446
      Anonymous
      No.120446

      I think that when it comes to using AI for improving video quality, those 4k AI upscales of anime do a pretty good job when there's no quality alternative (60 fps is still garbage)

      For the most recent example I was able to find of a massive upgrade that far outpaces the original video source, I was looking at the OP for Dancouga Nova. Every video source for it looks more or less like Juusou Kikou Dancouga Nova OP ([embed]), high in artifacts or noise and extremely low res, so it looks like ass when on fullscreen (I checked the DVDs). However, looking at the AI upscale, 獣装機攻ダンクーガノヴァ OP 鳥の歌 AI 4K 中日字幕 (MAD·AMV) (回憶系列#160) ([embed]), one can see a massive improvement if they were to view it in fullscreen on a 4k monitor. The one drawback seems to be that there's a bit of blobbiness in some areas, but in almost every other way it beats the original. In fact I'd say that AI upscaling does a much better job on average, from what I've seen, compared to all the kuso upscaled BDs that anime companies have shat out for older stuff.

    139. Post 120470
      Anonymous
      No.120470
      [Pizza] Ur...jpg
      - 451.04 KB
      (1920x1080)

      >>120446
      Yeah, that's not bad. I think the term "AI" is abused a bit much and this is really just a good upscaler. I guess if something like waifu2x is considered AI then this is too, huh. It's all about the denoising to make it 'crisp' and yet not create blobs. It's not like you're getting new information, just clean up the artifacts.
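
      The "no new information" point can be made concrete with a plain (non-AI) bilinear upscaler: every output pixel is just a weighted average of existing pixels, which is why naive upscales look soft. Learned upscalers differ precisely in that they add plausible detail on top of this. Minimal pure-Python sketch:

```python
def bilinear_upscale(img, factor):
    """Upscale a 2D grayscale image (list of row lists) by an integer factor.

    Each output pixel is interpolated from its four nearest input pixels,
    so the result contains smooth transitions but no detail that wasn't
    already in the source.
    """
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Map output coordinates back into input space.
            fy = y * (h - 1) / (H - 1) if H > 1 else 0.0
            fx = x * (w - 1) / (W - 1) if W > 1 else 0.0
            y0, x0 = int(fy), int(fx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = fy - y0, fx - x0
            out[y][x] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out
```

      Tools like waifu2x layer a learned denoiser over this kind of resampling, which is what makes edges crisp instead of merely bigger and blurrier.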

      In other news, tumblr, the company that famously killed itself in a day by banning porn leading to an exodus of artists, is now going to sell all of its user data to OpenAI/Microsoft. The data stretches back to 2013, so while various stuff was deemed too evil to be on tumblr, it's good enough to be sold.
      https://www.theverge.com/2024/2/27/24084884/tumblr-midjourney-openai-training-data-deal-report
      https://www.engadget.com/tumblr-and-wordpress-posts-will-reportedly-be-used-for-openai-and-midjourney-training-204425798.html

      This AI stuff is really getting ugly.

    140. Post 120487
      What's your phone wallpaper?
      No.120487
      1703019150...png
      - 1.15 MB
      (1444x1317)

      >>116507
      There was a pretty funny incident around a year ago in my country.
      Here, national universities don't have entrance exams, instead you get a final exam at the end of high school and you need to pass that exam if you want to enter any uni. So the time of the exam is flipped from start of uni to end of high school and everyone across the whole country does the same exam (for math and literature at least, med school has another exam for med stuff, etc.)

      Anyway, last year, in the literature exam, there was some question about the plot of a book that's mandatory reading, and the question asked you to write the answer in words, so it wasn't just circling the correct answer. And what happened is that several thousand students all wrote the exact same incorrect answer, word for word. They all used chatgpt, of course, probably with a similar prompt, and it gave everyone the exact same sentence.
      It was a huge scandal and it was pretty fun listening to literature professors' reactions. Apparently they'll be upping the security on checking phone usage during the test this year, but I'm expecting something similar to happen again lol

    141. Post 120501
      Anonymous
      No.120501
      1585150848...png
      - 29.11 KB
      (186x183)

      When I was young and got my first computer, even a little bit before that, I always had an infatuation with the idea of chatting with an AI. I narrowly avoided turning out an Applefag, I asked for an iPhone for one of my birthdays exclusively because of siri. It was too expensive at the time so I was spared an unfortunate alternative future, but I know for sure I'd be talking to my phone for hours on end even if it's a bad facsimile of the real deal.
      I'm pretty happy with how things are going these days, to say the least. Lots of people are throwing around doomsday scenarios about how the hidden shadow elite will cull humanity using magic lizard methods activated via G5 or something, but I don't really care if AI is going to have a negative impact on society. I'm just content I get to actually try out a childhood dream I had, even if I grew out of that fascination over the years.

    142. Post 122390
      Anonymous
      No.122390

      >>108214
      dam!

    143. Post 122391
      Anonymous
      No.122391
      [Serenae] ...jpg
      - 365.30 KB
      (1920x1080)

      >>122390
      hehehe
      For those unaware, the go-to joke for GPT3 was "What did the fish say when it hit a wall?" or however it went.
      That 2023 state of affairs is no longer entirely true, although it's up to opinion. Claude3 is pretty good at humor stuff and it makes you wonder where it's scraping the data from (there's obviously lots of 4chan and forum stuff). It's a weird situation because it can't actually be novel since it's an LLM, and an important thing about humor is novelty. Basically it's funny to you as long as the data it's referencing isn't directly known to you.

      I'll be able to show some examples soon, I think...

    144. Post 122489
      Anonymous
      No.122489
      e1dbc920aa.png
      - 428.88 KB
      (1300x1265)

      Some people said that Claude isn't good at coding stuff, but I like its tech analysis more

    145. Post 122490
      Anonymous
      No.122490
      12f6f99659.png
      - 170.03 KB
      (1324x418)

      >>122489
      Haiku

      <- Opus

    146. Post 122492
      Anonymous
      No.122492

      I wonder how I could feed it some information about newer tech problems I have a hard time understanding and have it digest them down to youtuber-tier explanations

    147. Post 124639
      Anonymous
      No.124639
      20240519_0...jpg
      - 438.66 KB
      (1922x2048)

      A downside of learning JP to read VNs/Manga and watch anime in harmony is that what I post while doing so will be inaccessible to a fair bit of people. So seeing the advancements of AI in making translation more real-time and convenient makes me happy and hopeful that we'll have on-the-fly OCR translations in the future that people who don't know JP can use.

    148. Post 124640
      Anonymous
      No.124640
      [SubsPleas...jpg
      - 227.71 KB
      (1920x1080)

      >>124639
      Yeah, "live" OCR stuff is nothing new and people have been doing it for nearly a decade now, but having far faster stuff that's also a bit better (but still not great, contextual language and all that stuff) is really quite amazing. I didn't stop the Nosuri playthrough I was doing because of the translation, but because of the font being unreadable with OCR...
      Well, maybe AI OCR stuff will progress, too. I don't think I could get away with sending GPT4-O thousands of screenshots without paying

    149. Post 124679
      Anonymous
      No.124679

      >>124639
      >>124640
      I like the AI/live-OCR stuff but I worry about people using it to churn out lazy translations they don't bother to check. We're already seeing a lot of that and now some companies are trying to cash in.

      But I think it would be a very valuable tool for learning a second language. As long as it doesn't teach you bad habits. What I'm really looking forward to is live-speech translation improving. Picking up kana and some basic kanji didn't take me that long. But learning how to speak like a native speaker and being able to understand a native speaker are a very different matter. Especially when you do not have access to one IRL to practice with. Even then they're usually speaking slow and not teaching you certain words and concepts (like internet slang). No Japanese teacher in an institution of learning is going to cover subjects like common otaku slang or curse words.

      Then there is the issue of dialects. You could spend years learning one dialect and be totally unable to understand someone speaking the language in a dialect common just 1 hour outside of the major cities. The main barrier I had learning how to speak basic Japanese was the fact that our teacher couldn't understand our local English dialect well and we could barely understand her Engrish. Every lesson was incredibly frustrating especially with a class room of idiots making fun of her daily.

    150. Post 129577
      Anonymous
      No.129577
      SUCKS.jpg
      - 258.58 KB
      (1280x720)

      I THINK AI SUCKS

    151. Post 129579
      Anonymous
      No.129579

      >>129577
      agreed
