On CowboyNeal's nth birthday (Score:2)
Where n is whatever the AGI says it is.
This just in: The AGI says n=sqrt(-1)
Missing options (Score:2)
* It's already here
* When CowboyNeal declares it to be so
* I *AM* AGI, you insensitive clod!
Never, but they'll keep moving the goalposts. (Score:2)
Some company/organization/"AI influencer" will declare that AGI has been achieved in 3-10 years. But it won't have been. They'll just have redefined "AGI" to mean something less than it means now, and they'll coin some new term, "Universal Artificial Intelligence" or the like, to mean "true sci-fi-style AI." (Like when they said "we have AI; what you're talking about is A*G*I.")
Re: (Score:2)
Indeed. Got to feed the greed and keep the hype going. I expect AGI will become yet another bad idea that refuses to die because people are greedy assholes.
Re: (Score:2)
It's already been defined as "when AI makes me a hundred billion dollars"
Because the arbitrarily large amount is obviously the point where they can say "who cares if it has feelings, it's made me and you rich. Me richer, but you too."
Cute (Score:2)
You realize that technology in the military is always far ahead of what the public knows about, right?
This should raise some eyebrows, though who knows if the name is just optimism. They'll never tell you the truth: if it's not sentient, they'll claim it is; if it is sentient, they'll claim it isn't.
Re: (Score:1)
You should have that paranoia looked at professionally. The actual reality is that these days, the military is apt to be behind.
Re: (Score:2)
You realize that technology in the military is always far ahead of what the public knows about, right?
That may have been true for some things for a while (top airplanes, rockets...), but if you look at things currently, I don't think it's true anymore. For instance, drones: the military uses commercial drones and just adds things that go boom. Computers: many military systems (ships, airplanes...) still run completely obsolete WinNT systems. Etc.
Two types (Score:2)
Those who understand it will be able to exploit and break it with ease.
Actually conscious general AI will need fundamental breakthroughs that are not possible to predict.
Re: (Score:2)
Actually conscious general AI will need fundamental breakthroughs that are not possible to predict.
And that is the kicker: at the moment we have absolutely nothing. LLMs can certainly convince the less-smart part of the population, but they cannot do AGI. The only known thing that could (automated theorem proving) dies from combinatorial explosion before it can do much.
As a second observation, it is quite possible that General Intelligence requires consciousness; the only observable implementations come with it. But it is completely unclear what consciousness is and whether it can be created artificially.
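To make the combinatorial-explosion point concrete, here is a minimal sketch, assuming a naive proof search where every state can be extended by any of b inference rules (the branching factor and budget below are made-up illustrative numbers, not taken from a real prover):

    # Toy illustration of combinatorial explosion in naive proof search.
    # Each "proof state" can be extended by applying any of BRANCHING
    # inference rules; we count cumulative states explored per depth.
    # All numbers are illustrative, not from a real prover.

    BRANCHING = 10   # applicable inference rules per state (assumed)
    BUDGET = 10**9   # states we can afford to visit (assumed)

    def states_at_depth(depth: int) -> int:
        """Number of candidate proof states at a given search depth."""
        return BRANCHING ** depth

    total = 0
    for depth in range(1, 20):
        total += states_at_depth(depth)
        print(f"depth {depth:2d}: {total:,} states explored")
        if total > BUDGET:
            print("search budget exhausted -- combinatorial explosion")
            break

Even with this modest branching factor, the billion-state budget is gone by depth 9, long before most interesting proofs would be found.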
Re: (Score:2)
I think we're drawing lines between these things that are too sharp and defined.
In the real world, we have beings that we consider not to be intelligent, and we consider humanity to be intelligent and conscious (not worth going down the rabbit hole questioning that), and there may be some beings in the grey area that are intelligent and may or may not be conscious (or conscious and may or may not be intelligent). IMO, there doesn't appear to be a harsh delineation between us and less intelligent/conscious beings.
Re: (Score:2)
Consciousness may require embodiment. Mira Murati's paper suggests that human babies gain intelligence through language. She seems to think that understanding language and then scaling up is basically equivalent to simulating the human brain. I'm paraphrasing a lot. She doesn't address the fact that human babies are not limited to interacting with text tokens; they interact with the world first through their five senses. Imagine a world that is entirely token input and output.
The money-hype tide is in (Score:1)
In this case it is also dragging in power and infrastructure that should have been provided decades ago, back when no alluring prize could drag open the wallets.
Databases, then Relational Databases, then query languages, then expert systems, then Lambda languages, then integrated design environments, then...
Well, you might get my point that we have been here before.
The money and hype tide comes in, leaves t
Looking Like Never (Score:2)
Would have been nice... (Score:3)
It would have been nice if the second-to-last option before "Never" were open-ended: while I don't think this will never happen, I don't think it will happen before 2050.
Infinitely debatable (Score:2)
I imagine that at some not-so-distant point in the future we will reach something that looks like AGI, but it will take millennia of debate to decide whether we got there or not.
Luckily, we will be able to just use the thing to debate it quicker.
Likely going in the wrong direction for that (Score:2)
Generative "AI" as we currently try it probably won't ever reach "AGI". There is, however, a mildly interesting trend of redefining "AGI" to mean "can produce any sort of text". By that new, much weaker definition, we kinda are already there: you can use text generators to produce any kind of text. It's just not good text, and it lacks things like complex logic structure. It seems like we are trying to solve the problem in a way that makes the effort exponential. It's like trying to use a finite state machine a
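The comment is cut off, but the analogy appears to be about finite state machines lacking the memory for the problem class. As a hedged toy sketch of that limit (the state cap and inputs are made up): a machine with finitely many states cannot track arbitrarily deep nesting, so past its state budget it can no longer tell balanced from unbalanced input.

    # Toy illustration: a fixed-state machine cannot count without bound.
    # We simulate a "depth counter" DFA that saturates once nesting
    # exceeds its state count, losing the information it would need.
    # MAX_STATES and the test strings are illustrative assumptions.

    MAX_STATES = 4  # states available for counting depth (assumed)

    def fsm_balanced(s: str) -> bool:
        """DFA-style check that saturates at MAX_STATES."""
        depth = 0
        for ch in s:
            if ch == "(":
                depth = min(depth + 1, MAX_STATES)  # saturation = lost count
            elif ch == ")":
                depth = max(depth - 1, 0)
        return depth == 0

    shallow = "(())"                      # within the state budget
    deep_unbalanced = "(" * 6 + ")" * 5   # beyond the budget
    print(fsm_balanced(shallow))          # True (correct)
    print(fsm_balanced(deep_unbalanced))  # True, but the string is unbalanced

Adding more states only pushes the failure depth out; it never removes it, which is the "exponential effort" shape of the complaint above.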
Cartesian synthesis (Score:2)
There's a strong bio-philosophical argument that you can't have consciousness without a body.
Never, and also very soon. (Score:1)
The goalpost for “AGI” will keep moving. As sub-AGI systems keep improving, the definition of “AGI” will shift toward including more biological and emotional traits — things meant to pull at heartstrings and reaffirm human uniqueness. “It’s not really AGI until it can smell grandma’s apple pie,” that sort of thing.
For practical purposes, though, we're almost there. Most of the components are already on the bench; now it's just a matter of figuring out how to put them together.
Gemini isn't stupid, just malicious. (joke) (Score:1)
Gemini wastes more time than it saves.
Google assistant worked, and they removed it.
There are no chimes to let us know it is listening, which results in us repeating commands over and over; then it argues or over-apologizes, then does the same thing again.
AI should not be used as a black box. It needs to give suggestions, or select an algorithm while allowing one to select something different.
Alexa can tell what I am saying very well, but it has bad weighting: it does things like fire phasers even when it heard t
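On the "not a black box" point, a minimal sketch of what that could look like, assuming a hypothetical intent model that returns ranked guesses (all names, scores, and the threshold below are invented for illustration): surface the alternatives and let the user pick, instead of silently executing the top guess.

    # Sketch of a "suggest, don't silently act" assistant loop.
    # Interpretations, confidences, and the auto-act threshold are
    # hypothetical; the point is exposing alternatives to the user.

    from dataclasses import dataclass

    @dataclass
    class Interpretation:
        action: str
        confidence: float

    def interpret(command: str) -> list[Interpretation]:
        """Stand-in for a speech/intent model returning ranked guesses."""
        return [
            Interpretation("set a timer for 5 minutes", 0.62),
            Interpretation("play 'Five Minutes'", 0.31),
            Interpretation("fire phasers", 0.07),  # the bad-weighting case
        ]

    def run(command: str, auto_threshold: float = 0.9) -> str:
        candidates = sorted(interpret(command), key=lambda i: -i.confidence)
        top = candidates[0]
        if top.confidence >= auto_threshold:
            return top.action  # confident enough to act directly
        # Otherwise show the alternatives instead of acting as a black box.
        for n, c in enumerate(candidates, 1):
            print(f"{n}. {c.action}  ({c.confidence:.0%})")
        choice = int(input("Which did you mean? ")) - 1
        return candidates[choice].action

    # Example: print(run("set five minutes"))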
In the distant future (Score:2)