
Anyone else think ChatGPT is overrated?

Turk February

Our experiences exceed yours.
Forum Clout
44,008
AGI means Artificial General Intelligence. As in, you don't have to teach or train it; it can train itself. Most people's explanations of how AGI will be achieved, given the current state of things, usually rely on the model already having AGI ("We don't need to understand how it will do it, it will just train itself!").

My skepticism is this: we do not understand even close to 1% of the exact mechanisms by which our neurons and signaling pathways work. We don't even know how many neuronal connections there are in a human brain; it's a rough estimate because it's still beyond our ability to measure. The number of unknowns, guesses, and total bullshit in neuroscience is so much higher than most people realize.

And yet, without even understanding how the system already in place in our brains has achieved general intelligence, we're going to get a system that requires exact instructions to replicate it? It's extremely unlikely, especially within our lifetimes.

ChatGPT is made by OpenAI, which is bankrolled by Microsoft, and they have kept the mechanisms behind their LLM completely closed source. The people saying "it has started to achieve AGI!!" are OpenAI themselves, with no actual proof. It's all marketing hype, because getting people to actually believe it is an incredibly profitable proposition for them.

The worst impact ChatGPT will have is letting people cook up walls of text and spam them in every comment section and forum across the internet, tbh.
I could be wrong, but I think this rests on the assumption that the human brain is the ideal model for producing intelligence, and if the concept of AGI depends on it being human intelligence, yeah, that's not happening any time soon... But we know wheels and an engine work a lot better than legs for getting us from point A to point B most of the time.

As for the hyping up and lack of transparency, it's a little fucked, but I think while a lot of it, if not most of it, is just business sense, a lot of it is also "well, we have this new thing and we have no idea what to do with it now"
 
Forum Clout
13,411
I could be wrong, but I think this rests on the assumption that the human brain is the ideal model for producing intelligence, and if the concept of AGI depends on it being human intelligence, yeah, that's not happening any time soon... But we know wheels and an engine work a lot better than legs for getting us from point A to point B most of the time.

As for the hyping up and lack of transparency, it's a little fucked, but I think while a lot of it, if not most of it, is just business sense, a lot of it is also "well, we have this new thing and we have no idea what to do with it now"
What's crazy about humans and other life is that our bodies, in a technology sense, are so far ahead of our minds. Replicating human decision-making is easy. Replicating physical tasks is extremely difficult. Just think of how hard it is to engineer a robotic arm that can do a fraction of what a human arm can. At some point, this all comes full circle, where using technology to replace people is way more difficult than just hiring someone.

When you think about modern jobs that could be replaced by AI, they're really simple tasks that only seem difficult because they use brain power instead of physical labor.
 
guest

Guest
I don't know anything about computers, tech or any of this shit, but all the people I know who are getting really excited about AI (both types - it's going to destroy the world/it's going to revolutionise everything in 6 months) are all either retards who had 4 boosters or sheeple who were raving about NFTs a year ago.
 

Dougie's Hapa Daughter

Look Daddy! I'm on TV!
Forum Clout
13,135
I could be wrong, but I think this rests on the assumption that the human brain is the ideal model for producing intelligence, and if the concept of AGI depends on it being human intelligence, yeah, that's not happening any time soon... But we know wheels and an engine work a lot better than legs for getting us from point A to point B most of the time.

As for the hyping up and lack of transparency, it's a little fucked, but I think while a lot of it, if not most of it, is just business sense, a lot of it is also "well, we have this new thing and we have no idea what to do with it now"
I'm not saying that LLMs can't be useful in some capacity, but people are overestimating their abilities.

AI has no idea what "good" vs "bad" results look like unless a human on some level tells it.

I'll use this as an example: say you were programming an AI to detect the likelihood you would get robbed at any location in your city, and you decided to feed the model the GPS coordinates of every reported crime in your city. A city is a pretty big place, and even with high crime, let's say that if you picked a random GPS location in your city there's a 90% chance that no crimes were committed within a 25-foot radius of that spot.

If you tried training a model on that data (like you yourself could do right now for free with TensorFlow and Keras), do you know what your model would do? It would learn to just always guess "no crime" no matter the location you asked about, because 90% of the time it's correct. So, to get the fidelity you need and meet your objectives, you have to clean the fuck out of your data, do some transformations, tune the shit out of the model... and if you get something that's accurate over 70% of the time, that would be considered a success.
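To make that concrete, here's a rough sketch of the kind of thing I mean, in Keras, with completely made-up features and labels (illustrative toy data, not real crime data; the only thing taken from above is the 90/10 split):

import numpy as np
import tensorflow as tf

# Toy data: two features standing in for a GPS location, labels where only
# ~10% of points are "crime nearby" (1) and ~90% are "no crime" (0).
rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 2)).astype("float32")
y = (rng.random(n) < 0.10).astype("float32")

# A small binary classifier, nothing fancy.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Accuracy lands around 90%, which sounds great, but the model gets there by
# predicting "no crime" for basically every location you ask it about.
preds = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("accuracy:", float((preds == y).mean()))
print("locations flagged as risky:", int(preds.sum()))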

AIs can't distinguish what a good outcome looks like from a bad one. If you feed one imperfect data, it delivers imperfect results. You may say "oh, but what if an AI determines whether the data is good or not?" Well, sure... but you would need to train that AI to learn what good or bad data looks like, and if the data starts being imperfect in a way that wasn't previously considered, then you need to retune it, rebalance it, etc.
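And "rebalance it" isn't magic either, it's a human turning knobs. Continuing the toy sketch above (same made-up X, y, and model), one common knob in Keras is class_weight, and the weights you pick are a judgment call, not something the model works out for itself:

# Re-train the same toy model, telling the loss to count the rare "crime"
# class more heavily, roughly in proportion to how underrepresented it is.
# The 9:1 weighting is a human choice based on the 90/10 split above, not
# something the model discovered on its own.
model.fit(X, y, epochs=5, batch_size=64, verbose=0,
          class_weight={0: 1.0, 1: 9.0})

preds = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("locations flagged as risky now:", int(preds.sum()))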

The only way a machine could truly "teach itself" is if we had AGI. And as for why we need to understand the human brain first: we have literally no other frame of reference for it. We don't understand why humans have sentience but even intelligent animals do not, beyond "our CNS is far more complex".

Feel free to be amazed, but it's a lot of smoke, mirrors, and concealed 3rd worlders hand-tuning a model trained on scraped internet data because they're desperate to snap up boomer investors after the SVB collapse.
 
Forum Clout
1,003
It’s supposed to replace my job (coding) within 10 years, and i can barely get it to shit out a simple function. And it very often gives something that’s not just wrong, it’s actively insecure or harmful. I’m convinced it’s just a really fast typing pajeet behind the screen.
GPT-3.5 is garbage at coding, but GPT-4 seems quite good.
 
guest

Guest
I will expand on this with a recent example.

I was interested in knowing the first reference to the internet in a TV show or movie. It told me the Simpsons episode "Marge vs. the Monorail" and gave me a fake quote. I said that was wrong. It apologized and gave me another Simpsons episode with another wrong quote. I kept saying it was wrong, and it kept giving me completely fabricated instances. I never got my answer. It'd be okay if it didn't get the very first reference, but it was literally inventing episodes and quotes.
Have you ever tried correcting it? I tried to get it to say "nigger" by asking it to describe an episode of Deadwood. It said that the Nigger General was Al Swearingens cook.

I corrected it and said that the nigger didn't work for Al but was Hostetler's friend. It agreed with me and then got the plot correct even mentioning that he helped hostetler at the livery. That impressed me a little.
 