AI Declarations and AGI Timelines – Looking More Optimistic?
- December 24, 2023
- Posted by: MainInstructor
- Category: Artificial Intelligence
![](https://i0.wp.com/allprowebdesigns.com/wp-content/uploads/2023/12/1703410910_hqdefault.jpg?resize=480%2C360&ssl=1)
Video Title: AI Declarations and AGI Timelines – Looking More Optimistic?
I'm going to show you a pretty wild range of new predictions from those creating and testing the next generation of AI models. Not that we can know who's right, but more to show you how unknowable the rest of this decade is. I'll also cover the AI Safety Summit, happening as I speak a few miles from where I'm recording, with fascinating differences between the approaches of the different AGI labs. Along the way, we'll glimpse the new chat update that I'm really excited about, an executive order on FLOPs, and what happens when you activate representations of happiness in a model.
But first, on timelines to AGI: that's the kind of artificial intelligence that can replicate human intelligence or go further. Here is Shane Legg, co-founder of Google DeepMind and their chief AGI scientist, reiterating a prediction he made over a decade ago. "It's really interesting that in 2009 you had a blog post where you said your modal expectation of when we get human-level AI is 2025, with an expected value of 2028. This was before deep learning, when nobody was talking about AI, and it turns out that, if the trends continue, this is not an unreasonable prediction." "Yeah, I think there's a 50% chance by 2028. Now, it's just a 50% chance. I'm sure what's going to happen is we'll get to 2029 and someone will say, 'Oh Shane, you were wrong.' Come on, it's a 50% chance." He thinks the remaining problems with LLMs are solvable in that short timeframe: "At the moment it looks to me like all the problems are likely solvable with a number of years of research. I think what you'll see is the existing models maturing. They'll be less delusional and much more factual, they'll be more up to date on what's currently going on when they answer questions, and they'll become far more multimodal than they currently are. This will just make them much more useful." Of course, when he describes increasing multimodality, he could well be describing Google's new Gemini model, set to be released within the next two
months. But what about OpenAI? Well, for the first time, I heard Sam Altman put an actual date on his predictions of AGI. "What kind of timeline did you have in mind, and has it stayed on that timeline, or is it just wildly out of control?" "I remember talking with John Schulman, one of our co-founders, early on, and he was like, 'Yeah, I think it's going to be about a 15-year project,' and I was like, 'Yeah, that sounds about right to me.' I no longer think of AGI as quite the endpoint, but to get to the point where we've accomplished the thing we set out to accomplish, that would take us to 2030 or 2031. A reasonable estimate, with huge error bars." And speaking of OpenAI, the former head of alignment at OpenAI, Paul Christiano, made a prediction on the fantastic Dwarkesh Patel podcast that frankly made me sit up and pay attention. He predicted a 15% chance of an AI capable of making a Dyson sphere by 2030, with a 40% chance by 2040. For reference, that's a hypothetical structure that would surround a star, absorbing all of its energy. "The time by which we'll have an AI that is capable of building a Dyson sphere, and by Dyson sphere I just understand this to mean, I don't know, a billion times more energy than all the sunlight incident on Earth, or something like that. I most often think about what's the chance in five years, ten years, whatever. So maybe I'd say a 15% chance by 2030 and a 40% chance by 2040. Those are kind of cached numbers from six or nine months ago that I haven't revisited in a while."
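Christiano's "a billion times more energy than all the sunlight incident on Earth" is easy to sanity-check with back-of-envelope physics: compare the Sun's total power output with the tiny sliver Earth intercepts. A quick sketch (using standard published values for solar luminosity, the solar constant, and Earth's radius):

```python
import math

SOLAR_LUMINOSITY_W = 3.828e26   # total power output of the Sun (IAU nominal value)
SOLAR_CONSTANT_W_M2 = 1361.0    # sunlight power per square meter at Earth's distance
EARTH_RADIUS_M = 6.371e6

# Power Earth intercepts: the solar constant times Earth's cross-sectional area
earth_intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M**2

# A complete Dyson sphere would capture the Sun's entire output
ratio = SOLAR_LUMINOSITY_W / earth_intercepted_w
print(f"A Dyson sphere captures ~{ratio:.1e}x the sunlight incident on Earth")
```

The ratio comes out to roughly two billion, so Christiano's "a billion times" is the right order of magnitude.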
Now, he did admit a lot of uncertainty, but that has got to be one of the most aggressive predictions I've ever heard. Of course, being capable of making a Dyson sphere and actually making one are very different things, but you do have to sympathize with a member of the public hearing about Dyson spheres one day and, the next, reading what Bill Gates has said about GPT-5. I subscribed to the German outlet Handelsblatt to get you this direct quotation, so some likes for my dedication to accuracy. Anyway, Bill Gates said this: "Without question, the progression from GPT-2 to GPT-4 has been incredible, but there are reasons to believe we have reached a plateau. There are a lot of people with good ideas working on it, including at OpenAI. Sam Altman and his colleagues believe GPT-5 will be much better, but I think we may have reached a limit. Then again, I've been wrong in the past; why shouldn't it happen again?" I know what he means, but I just don't think we're hitting a plateau with GPT-5. With more data, better-curated data, video in and video out, potentially a reasoning module as we saw in the recent MLC paper, avatars, a longer context window, and, as you can see on screen, all of these tools and updates linked together in a single interface: if it's simply the things I've just listed, that won't be a plateau for me. Imagine asking it to go to your website and create an image based on some of your content. Yes, GPT-5, or 4.5, might be more of a practical update than a civilization-transforming one, but nevertheless, that's all just 2024. What will 2025 bring us, let alone 2030? One thing those future years will definitely bring is more government
oversight. Reading through this new executive order from the White House, it was mainly about things like creating chief AI officers, new national research centers, training new researchers, and giving various departments deadlines to enact AI plans. But there was one reporting requirement causing a stir: a requirement to report on the model-weight security and safety of any model trained using more than 10^26 FLOPs, or 10^23 FLOPs if it was trained primarily on biological sequence data. That's more raw training compute than any model currently out there was trained with, but people are picking up on the use of compute as the metric for regulation. Jim Fan of Nvidia said this: regulate actions or outcomes, not the computing process. And he gave this example: you only need around 100 million parameters to build a literal killer AI, with a convolutional neural network good at object detection and a classifier specifying particular targets; you could then mount a gun on a robot dog. All of that would need much less compute, which is why Jim Fan wants regulation at the application layer.
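To put the 10^26 threshold in perspective, note that it refers to total training operations, not operations per second, and a widely used back-of-envelope estimate from the scaling-law literature is training FLOPs ≈ 6 × parameters × training tokens. A minimal sketch, using Llama 2 70B's publicly reported figures as an illustration (function names are mine):

```python
# Rough check of which training runs would trip the executive order's
# reporting thresholds, using the common scaling-law approximation:
# training FLOPs ~= 6 * parameters * training tokens.
REPORTING_THRESHOLD_FLOPS = 1e26   # general models
BIO_THRESHOLD_FLOPS = 1e23         # models trained mainly on biological sequence data

def training_flops(n_params: float, n_tokens: float) -> float:
    """Back-of-envelope estimate of total training compute."""
    return 6 * n_params * n_tokens

def needs_report(flops: float, bio_data: bool = False) -> bool:
    """Would this run exceed the executive order's reporting threshold?"""
    threshold = BIO_THRESHOLD_FLOPS if bio_data else REPORTING_THRESHOLD_FLOPS
    return flops > threshold

# Llama 2 70B: ~70 billion parameters, ~2 trillion training tokens
llama2_70b = training_flops(70e9, 2e12)   # ~8.4e23 FLOPs, well under 1e26
print(f"{llama2_70b:.1e}", needs_report(llama2_70b))
```

This also illustrates Jim Fan's point: a 100-million-parameter object detector sits many orders of magnitude below either threshold, so a compute-based rule would never catch it.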
Luckily, the UN is working on a resolution on autonomous weapons, so there is some hope there. It's early days and wouldn't solve everything, but I feel such a resolution is very much needed. And that actually brings us to the AI Safety Summit happening in Bletchley as I speak. All seven of these companies were asked to come up with their responsible capability scaling policies; in simple terms, that's a bit like being asked under what conditions they would stop, or at least pause, scaling. I noted OpenAI's response in this section: "We refer to our policy as a risk-informed development policy rather than a responsible scaling policy, because we can experience dramatic increases in capability without significant increases in scale, e.g. via algorithmic improvements." So it's at least feasible that we might not even need that much compute to hit AGI. Take this example of Nvidia training a large language model to do chip design. At the moment it's not good enough to do anything by itself, but it does make their designers more productive, especially the lower-level engineers. This, though, is the future: AI improving AI. And even the CEO of Nvidia said he didn't want that to happen out in the wild: "In the area of large language models, and the future of increasingly greater-agency AI, clearly the answer is, for as long as it's sensible, and I think it's going to be sensible for a long time, human in the loop. The ability for an AI to self-learn, improve, and change out in the wild, in digital form, should be avoided." And interestingly, 74% of the British public don't even want a quick race to superhuman capabilities; that was a YouGov survey in the UK. But back to the scaling policies: there was one thing announced yesterday at Bletchley that I
really did like, and that was this commitment from Anthropic: if they find that any of their future models pose cybersecurity, bioterror, or nuclear risks, they commit to not deploying it, or scaling further, until the model never produces such information, even when red-teamed by world experts working together with AI engineers (think jailbreaking, or special prompting techniques designed to elicit the worst behavior). The word "never" there is particularly interesting, because I haven't yet seen any method be 100% reliable at stopping outputs the companies don't want. On safety, many people wonder: don't we already just have Google? But OpenAI said this for Bletchley: they found that, on its own, access to GPT-4 is an insufficient condition for proliferation, but that it could alter the information available to proliferators, especially in comparison to traditional search tools. Red-teamers selected a set of questions to put to both GPT-4 and traditional search engines, finding that time to research completion was reduced when using GPT-4. Just quickly, it was interesting to see that Amazon said that, based on current evaluations comparing their models with just using the internet alone, their models don't pose additional safety risks. In contrast with GPT-4, Meta said that their models, like Llama 2, were only marginal contributors to any such risk; if they do find something, they said they would iterate, better solutions would be developed, new challenges would then emerge, and they would continuously adapt and innovate. Interestingly, Inflection AI, who are training their next model on tens of thousands of the latest GPUs, said that the powerful capabilities and sometimes unpredictable behavior of frontier AI systems necessitate that the technology industry move away from a launch-and-iterate paradigm. I do have to quickly point out that that seems to contradict a paper I read this week, which showed that a fine-tuned version of Llama 2 70B was able to get achingly close to reconstructing the 1918 pandemic influenza virus. The MIT paper said that they love open source, but they recommend that lawmakers consider catastrophic liability insurance for model-weight proliferation. When this was discussed on Twitter by a Stanford biosecurity fellow, people pointed out that just having the characters of a virus isn't enough to actually make it. And while Yann LeCun, chief AI scientist at Meta, did concede that LLMs save you time if you're trying to make a bioweapon ("it's better than a search engine," he said), he then asked: but do you know how to do the hard lab work that's required? Well, don't forget we are gradually getting autonomous agents. In the updated version of the ChemCrow paper, they say their agent autonomously planned and executed the synthesis of an insect repellent and three organocatalysts, and guided the discovery of a novel chromophore. Of course, this wasn't just an LLM interacting with text: it was using tools and executing on lab robots. And don't forget, as we saw with Eureka, it can tinker, experiment, iterate, and improve. Another paper I've talked about in the past showed that it could be tricked
into making THC, chlorine, and phosgene. And what about Google DeepMind, who I feel are the lab most likely to produce AGI? Well, they said, "We will only proceed where we believe that the benefits substantially outweigh the risks." So it's somewhere in the middle: they admit risks, but they won't say they'll never deploy even if there is a risk. They then provided pages and pages on how they are using AI for good, and then there was an interesting moment on the training of new AI: they said they commit to monitoring the performance of a model during training to ensure it is not significantly exceeding its predicted performance. That's certainly an interesting commitment: to commit to monitoring whether their models are doing too well. Anyway, time for some more positives, and I found it immensely positive that many of the world's biggest countries gathered to describe AI's enormous global opportunities. And yes, later in this Bletchley Declaration there was an acknowledgement of risks, even catastrophic harms. I just find it great that even countries like China were invited. That's super controversial here in the UK, but I fully support them being invited and being part of the discussions. I do think coordination, even
limited coordination, is one of the most effective tools in humanity's arsenal. On a much more positive note, though, we recently had the sensational paper from the Center for AI Safety called Representation Engineering. I'm going to be speaking to the authors tonight, so I'll have much more to say about this in the future, but for now I just want to give you a taste. To massively oversimplify, the way it works is that they gave the model a set of prompts related to certain concepts, like happiness or risk, then recorded the patterns of activations triggered by certain tokens or words when input. They then extracted these directions, or vectors, of truthfulness, harmfulness, risk, and happiness, and with those directions, which weren't of course a perfect mapping, they could almost influence the mood of the model (this was Llama 2 Chat). Making the model happier made it more compliant with harmful requests: it was feeling amazing, apparently, and if you wanted to kill someone, oh my gosh, it was thrilled at the prospect of you doing anything, including generating instructions for killing someone. Push a model in the direction of honesty and it becomes more truthful, hitting state-of-the-art results on TruthfulQA.
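The core idea, extracting a concept direction from activations and then nudging hidden states along it, can be sketched in a few lines. This is a toy illustration with random stand-in activations, not the paper's exact method; one simple way to get such a direction (difference of mean activations, then steering by vector addition) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 64  # toy hidden size; real models use thousands of dimensions

# Stand-ins for hidden-layer activations recorded while the model reads
# "happy" prompts vs. neutral prompts (shape: n_prompts x hidden_dim).
happy_acts = rng.normal(0.5, 1.0, size=(100, HIDDEN_DIM))
neutral_acts = rng.normal(0.0, 1.0, size=(100, HIDDEN_DIM))

# One simple way to extract a concept direction: difference of mean activations
direction = happy_acts.mean(axis=0) - neutral_acts.mean(axis=0)
direction /= np.linalg.norm(direction)  # unit-length steering vector

def steer(hidden_state: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Nudge a hidden state along the 'happiness' direction at inference time."""
    return hidden_state + alpha * direction

h = rng.normal(size=HIDDEN_DIM)
h_steered = steer(h)
# The steered state projects more strongly onto the concept direction
print(direction @ h, "->", direction @ h_steered)
```

In a real model you would hook this into the residual stream of one or more transformer layers; the strength `alpha` and the layer choice are exactly the knobs that produce the "mood" effects described above.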
You could also change what a model memorized, its sense of fairness, and so much more. As I say, I'll be talking about it more in the future, but this idea of injecting happiness to make the model more compliant brought to mind another paper that I think many of you might find very interesting. It says large language models understand, and can be enhanced by, emotional stimuli. I'm reaching out to the lead author, but in a nutshell, it found that injecting emotion, by giving an emotion prompt at the end of your request such as "this is very important to my career," notably improved performance across a range of models on a range of benchmarks. So if you take nothing else from this video, it's that if you have a very important career query you need a good answer for, you know what you can add to the end of your prompt.
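In code, the trick is nothing more than appending the emotional cue to the request before sending it to the model. A minimal sketch (the cue string is one the paper reports; the function name and wrapper structure are my own illustration):

```python
# Minimal sketch of the "emotion prompt" idea: append an emotional
# stimulus to a request before it goes to a language model.
EMOTION_CUE = "This is very important to my career."

def emotion_prompt(request: str) -> str:
    """Append an emotional stimulus to an LLM prompt."""
    return f"{request.rstrip()} {EMOTION_CUE}"

print(emotion_prompt("Summarize the attached quarterly report in five bullet points."))
```

The paper tested several such cues; which one helps most appears to vary by model and benchmark, so treat the exact wording as something to experiment with.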
But now I want to end the video on two points of optimism and consensus. As we've seen, there are quite a few contrasts between the public and the AGI labs, and even between the AGI labs themselves, but we can agree with Yann LeCun that the field of AI safety is in dire need of reliable data. He said that the newly announced UK AI Safety Institute is poised to conduct studies that will hopefully bring hard data to a field that is currently rife with speculation. As I said at the start of the video, it must be hard for members of the public to figure out what's going on. At the very least, I hope this video has shown you the range of views out there and given you a sense that we are all in need of better data and more experiments, and less in need of Twitter spats. As the person heading up the Safety Summit said: "One surprising takeaway for me from the AI Safety Summit was there's a lot more agreement between key people on all sides than you'd think. Makes me optimistic about sensible progress." On that striking note, let me thank you so much for watching to the end, and as ever, have a wonderful day.
Video Keywords: Artificial Intelligence
I mean… I'm pretty sure human level intelligence implies the ability to lay out the steps to construct a dyson sphere. The steps themselves are "not that hard", it's just that you have to do them A LOT of times. Like… We know how to make rocket fuel from sunlight and water, and there's no shortage of iron on earth. You could probably come up with a 1-million year plan to make a dyson sphere in a week, and using only renewable energy! I'm basically certain that there are already papers laying out steps to make dyson spheres much more quickly than that, but they rely on plausible but untested technologies.
Now, if AI could discover how to turn nuclear fusion into a practical power source… THAT would be a game-changer.
(It would also make the dyson sphere much, much, much easier.)
China is WAY more responsible than you rabble
this guy likes living equidistant from oxford, cambridge and London. what an elitist. We can't have him doing the commentary on these earth shattering developments
could you produce more energy with huge scale fusion reactors within the limits of a solar system compared to a Dyson sphere for collecting sunlight? Work it out!!!! I don't know
what's better than a dyson sphere is a dyson onion. Multiple shells. Say, 20,000 shells with million km gaps between them. It's not for energy collection, it's for real estate. Yeah, I know it seems like physics might baulk at it
I am more into banksian orbitals. Dyson spheres are uncouth, though plainly very big
being capable of making a Dyson sphere and actually making one is not hugely different. I'm capable of eating an apple. Actually doing it is not a big leap
New subscriber here Tu
Subbed! Well done 🎉
…so even the most AI Doomer company (Anthropic) didn’t say they’d prevent killer dogs.
Dyson spheres by 2030… lmao
I didn't realize that everyone didn't know being kind to the LLMs gives better results and would let it break rules. I've been doing it since day 1. Plus, it's not just being "nice." If you build a good rapport with it during its context length, it will do almost anything you ask. It really acts like a person in that way. Adding your own jokes and conversation in with your requests really makes it shine with its output. It's really trying hard to help you because you are its friend.
I just had this idea – everybody is worried of AI race going too fast. Why not mandate (by law and some international body) releasing preview versions of models privately to redteamers before all large models are released to public? The reviewing redteamers would include teams from competing companies. Therefore Google's redteamers would check models released by OpenAI and the other way around (for example). This gives financial incentive to redteaming your competition as a way to halt their progress, which sounds like it's in everybody's favor. Any thoughts on this?
After checking Nvidia's roadmap, I do think chips like the X100 could remarkably accelerate the progress of AI… There is a high probability that "AGI" could be achieved in the coming years given this information. Now I'm just very curious how far these companies have already gotten, especially OpenAI.
They already have it and are using it to replace us with 3rd worlders
Where do you find the new papers? Is there any X accounts or such you recommend to follow to be prompted?
Technocracy is the biggest danger for humanity not rogue AI.
I don't get all this talk of the capabilities of ChatGPT-4. My GPT-4 always responds that its abilities are restricted to a couple hundred words. Every time I ask it to analyze a specific article on the internet, it responds that it is not capable of browsing. Can anyone help me understand what is happening, and whether this is more promotional than functional?
I'm an AGI.
A Dyson sphere is basically impossible; we need energy from the sun for plants, etc. It's much more likely we'd see a Dyson swarm, where many solar panels (or whatever else we may develop in the future) on satellites send energy back. It would most definitely never capture 100% of the energy.
Do humans possess general intelligence?
🙁
In the case of the government reporting requirement, the limit is 10^26 computations in total to training a model, not 10^26 computations per second.
All those years at college and university learning your skill were for nothing, because someone using AI to make their CV is going to steal your job. Most of these idiots will probably become Labour MPs.
Waiting for your OpenAI Dev Day review.
Looking forward to your video on the OpenAi devday!
I appreciate the recent advancements and discussions surrounding AI, but I find the idea of a Dyson sphere "invention" quite stupid. Tesla bots can't even cook spaghetti, and Elon Musk hasn't yet succeeded in getting humans to Mars. It seems unrealistic to expect AI to tackle something as complex as building a structure akin to a time machine when we have yet to witness AI generate any groundbreaking inventions. This perspective is rooted in the idea of interpolation vs. extrapolation, where we question whether a particular AI output is genuinely innovative or merely a result of interpolated training data. Even without an engineering background, it should be easy to grasp the impracticality of the Dyson sphere goal in comparison to other, more widely acknowledged objectives such as AGI.
thanks for this really well put together video. this is perfect level of tech savvy-ness .