2022-23 welcomes you to the 15th annual players' cards of World Record Jobs - the how-to-play-games version of WRJ
Health: we continue to value alumni of Brilliant and of Nightingale (doubly so given the Ukraine situation), the women who built a nation around last-mile health care with Fazle Abed, and Abed's 21st-century comrade in spirit Jim Kim - without whom the signature transformation of UN leader Guterres, UN2, which proacts engineering/entrepreneur/education/servant-leader smarts into any silo of old government, would probably not be with us.
WorldClassDaos recommends we leap into a better 2020s. Best place to start: HONG KONG, as WorldClassEngineer laureate of 2022. Dad, Norman Macrae, coined the term Entrepreneurial Revolution in The Economist in 1969; friends think there would be few problems in the world if every 1 in 1000 humans were as energetic a multi-win trader as Hong Kong's. Hong Kong is leading the 21st century's coming of age with unprecedented co-creativity geared to making sure web3 serves communities in ways no previous web (2, 1) or tele-media did (arguably only Attenborough beat off vested interests to sustain 50 years of consistent TV storytelling access). Moreover, web3 has emerged out of a radical fintech foundation: the 2008 Satoshi concept was intended as a decentralised solution to serial abuse of communities by subprime banking.
JOTTINGS: Nightingales deliver motion for UNGA77. Why love Stanford. (rules options) ::
The top 2 alumni networks to cooperate with remain Fazle Abed's & Von Neumann's; with the urgent appearance of web3 to the make-or-break sustainability generation, we've spent time zooming up bop - e.g. Singapore Players ..... more WRJ
Upd Fall 2023 - the world's AI sea change affects everyone's futures; Musk's headline on the need for a 3rd-party referee was the transnational AI summit's deepest intelligent moment.
Upd Valentine's 2023 - Join us at twitterversal.com and TAO: Twitter Autonomy Ops. worldclassdaosgreenbigbang invites you to have a sneak peek at our new picks for 2023 if you are comfy with messy searches. SDGs rising by valuing women's productivity emulating men's.
Coming soon Tao.dance - dance then wherever you may be, for I am the oak tree of nature's dance, said (s)he
If you are going to help save the 2020s world from extinction (let alone Putin!), the top 50 people you'll need to learn and act with will be a deeply personal combo. GAMES OF WRJ #1: edit 50 playing cards from WRJ; ask a friend to do likewise; see how many common choices you made; then choose one card to keep that your friend had not chosen, and vice versa. By all means add in your own selections, and keep updating your 50-card aide-memoire. Bon courage! Who needs to be at WRJ? rsvp chris.macrae@yahoo.co.uk
9/8/18 Paul Oyer : Fei-Fei Li : Lei Zhang - WE WELCOME Q&A, THE MORE MATHEMATICAL OR HUMAN THE BETTER. chris.macrae@yahoo.co.uk MA stats Cambridge 1973

2016 bangladesh schools go edigital nationwide :: brookings video :: Bangla video :: brac how's that
1/1/21 we have entered the most exciting decade to be alive - by 2030 we will likely know whether humans & tech wizards can save futureoflife. Tech surveys indicate the odds of accomplishing this greatest human mission would be a lot less without the spirit of a Chinese-American lady at Stanford...
Bonus challenge for those on the road to Glasgow COP26 Nov 2021: what future do 8 billion peoples want to value from 2021? rsvp chris.macrae@yahoo.co.uk

GAMES of world record jobs involve
*pack of cards: world record jobs creators eg fei-fei li ; fazle abed ...
*six future histories before 2021 starts the decade of empowering youth to be the first sustainable generation.

Problem: 99% of what people value connecting or doing to each other
has changed (and accelerated) in the last three quarters of a century - while laws, culture and nature's diversity and health are rooted in real-world foundations that took mother earth 1945 years to build, and that's only using the Christian calendar.

1995 started our most recent quarter of a century with 2 people in Seattle determined to change the distribution of consumers' markets - Bezos's and Jack Ma's ideas of how, and of what this would involve, were completely different, except that both changed the purpose of being online from education knowledge to buying & selling things.
nb: consuming up things is typically a zero-sum game (or worse, if done unsustainably) - whereas life-shaping knowhow multiplies value in use.
From 1970 to 1995 the knowhow needed to end subsistence poverty of over a billion Asian villagers was networked person to person by women with no access to electricity grids - their number 1 WRJC involved partnerships linked by Fazle Abed. Borlaug's crop science was one of the big 5 action learnings; its person-to-person application saved a billion people from starvation. The first 185 years of the machine age - started up by Glasgow University's Smith and Watt in 1760 - had brought humans to the two world wars. When people from nearly 200 nations founded the United Nations at the San Francisco opera house in 1945, the chances of species survival looked poor; miraculously, one mathematician changed that before he died 12 years later. John von Neumann's legacy was both the moon race and twin artificial intelligence labs - one facing the Pacific ocean out of Stanford, the other facing the Atlantic out of MIT Boston. Who are the top job-creating economists by practice - health, refugee, sports, green, Hong Kong? Where are the top tour guides around billionaires 1, 2, around poverty... we the peoples ...


0:52 [Host] Welcome, all, to this discussion between Geoffrey Hinton, University Professor Emeritus at the University of Toronto, known to many as the godfather of deep learning, and Fei-Fei Li, the inaugural Sequoia Professor in Computer Science at Stanford University, where she is co-director of the Human-Centered AI Institute. Thanks to Radical Ventures and the other event partners for joining with U of T. 1:40 Professor Hinton and his students pioneered a U of T tradition of excellence; graduates, together with partners at the Vector Institute and at universities around the world, are advancing machine learning. Later this fall, faculty, students and partners will begin moving into phase one of the beautiful new Schwartz Reisman Innovation Campus just across the street - you may have noticed a rather striking building - creating Canada's largest university-based innovation hub, made possible by a generous and visionary gift from Heather Reisman and Gerald Schwartz. The innovation campus will be a focal point for AI thought leadership, hosting both the Schwartz Reisman Institute for Technology and Society, led by Professor Gillian Hadfield, and the Vector Institute. It's already clear that artificial intelligence and machine learning are driving innovation and value creation across the economy; they are also transforming research in fields like drug discovery, medical diagnostics and the search for advanced materials. Of course, at the same time there are growing concerns over the role AI will play in shaping humanity's future, which today's conversation clearly addresses. Without further ado, let me introduce today's moderator, Jordan Jacobs. 3:30 Jordan is managing partner and co-founder of Radical Ventures, a leading venture capital firm supporting AI-based ventures here in Toronto and around the world. Earlier he co-founded Layer 6 AI and served as co-CEO prior to its acquisition by TD Bank Group, which he joined as Chief AI Officer.
Jordan serves as a director of the Canadian Institute for Advanced Research, and he was among the founders of the Vector Institute, a concept he dreamed up with Tomi Poutanen, Geoff Hinton, Ed Clark and a few others. 4:14 [Jacobs] Come on up - uh, thanks very much, Meric. This is the first in our annual four-part series of AI founder master classes that we run at Radical; this is the third year we've done it, and today's is the first one of this year. We do it in person and online, so we've got thousands of people watching this online. 4:55 Radical Ventures works in partnership with the Vector Institute, the machine intelligence institute in Alberta, and with Stanford. Jeff is often called the godfather of artificial intelligence. He has won the Turing Award; he is a Professor Emeritus at the University of Toronto and co-founder of the Vector Institute, and he mentored a lot of the people who have gone on to be leaders in AI globally, including at the big companies and many of the top research labs in the world and in academia. Fei-Fei is the founding director of the Stanford HAI Institute for Human-Centered AI, a member of the National Academy of Engineering in the US, the National Academy of Medicine and the American Academy of Arts and Sciences. During 2017-18 she took up a role as a vice president at Google, as chief scientist of AI/ML at Google Cloud. There are many, many other things we could say about Fei-Fei, but she also has an amazing number of students who have gone on to be leaders in the field globally, and, really importantly, she has a book coming out in a couple of weeks. It's called The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI.
Jeff wrote on the back cover of the book: "Fei-Fei Li was the first computer vision researcher to truly understand the power of big data, and her work opened the floodgates for deep learning. She delivers an urgent, clear-eyed account of the awesome potential and danger of the AI technology that she helped to unleash, and her call for action and collective responsibility is desperately needed at this pivotal moment in history." 7:58 [Jacobs] Fei-Fei and Jeff, can we go back to the big-bang moment - AlexNet winning the ImageNet competition in 2012? Maybe, Jeff, do you want to take us through that moment from your perspective - which is 11 years ago now? [Hinton] Okay, so in 2012 two of my very smart graduate students won a public competition and showed that deep neural networks could do much better than the existing technology. Now, this wouldn't have been possible without a big data set you could train them on. Up to that point there hadn't been a big data set of labeled images, and Fei-Fei was responsible for that data set. I'd like to start by asking Fei-Fei whether there were any problems in putting together that data set.
[Fei-Fei] Thank you, Jeff and Jordan - it's really fun to be here. So yes, the data set Jeff is mentioning is called ImageNet. I began building it in 2007 and spent the next three years, pretty much with my graduate students, building it. And you asked: was there a problem building it? 9:07 Where do I even begin? Even at the conception of this project I was told it really was a bad idea. I was a young assistant professor - my first year as an assistant professor at Princeton - and, for example, a very, very respected mentor of mine in the field (if you know the academic jargon, these are the people who would be writing my tenure evaluations) actually told me, really out of their good heart: please don't do this - you might have trouble getting tenure if you do. ([Hinton] You are talking about Jitendra Malik. [Fei-Fei] Mmm.) Then I also tried to invite other collaborators, and nobody in machine learning or AI wanted to even go close to this project - and of course, no funding. [Jacobs] Okay, could you just describe ImageNet for people who are not familiar with what it was? [Fei-Fei] So ImageNet was conceived around 2007, and the reason I conceived it was actually twofold. One: I was trained as a scientist, and to me doing science is chasing after north stars. In the field of AI, especially visual intelligence, object recognition - the ability for computers to recognize that there's a table in the picture, or there's a chair - has to be a north-star problem in our field, and I felt we needed to really put a dent in this problem, so I wanted to define that north-star problem. That was one aspect of ImageNet. Second, ImageNet was recognizing that machine learning was going in circles a little bit at that time - we were making really intricate models without the kind of data to drive the machine learning.
Of course, in our jargon it's really the generalization problem, and I recognized that we needed to hit reset and rethink machine learning from a data-driven point of view. So I wanted to go crazy and make a data set no one had ever seen in terms of its quantity and diversity and everything. ImageNet, after 3 years, was a curated data set of internet images totaling 15 million images across 22,000 object concepts - that was the data set. [Hinton] Just for comparison: at the same time in Toronto we were making a data set called CIFAR-10 that had 10 different classes and 60,000 images, and it was a lot of work; it was generously paid for by CIFAR, at 5 cents an image. [Jacobs] So you turned the data set into a competition - just walk us through what that meant, and then we'll fast-forward to 2012. [Fei-Fei] So we made the data set in 2009; we barely made it into a poster at an academic conference, and no one paid attention. I was a little desperate at that time, and I believed this was the way to go, so we open-sourced it. But even with open source it wasn't really picking up, so my students and I thought: well, let's get a little more drive off competition - let's create a competition to invite the worldwide research community to participate in this problem of object recognition through images. 13:27 So we made the ImageNet competition, and the first feedback we got from friends and colleagues was that it's too big - e.g. you cannot fit it onto a hard drive, let alone into memory. So we actually created a smaller data set, the challenge data set, which is only 1 million images across 1,000 categories instead of 22,000, and that was unleashed in 2010.
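The ImageNet challenge described here was scored by top-5 error: a prediction counts as correct if the true label appears among the model's five highest-scoring classes. A minimal sketch of that metric (the function name and the toy scores are mine, for illustration only):

```python
import numpy as np

def top_k_error(scores, true_labels, k=5):
    """Fraction of examples whose true label is NOT among the k highest-scoring classes.

    scores: (n_examples, n_classes) array of per-class scores
    true_labels: (n_examples,) array of integer class indices
    """
    # indices of the k largest scores per row (order within the top k is irrelevant)
    top_k = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (top_k == true_labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# deterministic toy example: 2 examples, 10 classes, every class scored 0..9,
# so the top-5 classes are always indices 5..9
scores = np.tile(np.arange(10, dtype=float), (2, 1))
labels = np.array([9, 0])            # first true label is in the top 5, second is not
print(top_k_error(scores, labels))   # → 0.5
```

Swapping k=5 for k=1 gives the ordinary top-1 error also quoted for the challenge.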
[Fei-Fei] I think you guys noticed it in 2011-2012 - Alex Krizhevsky, with Ilya Sutskever. [Hinton] Right, yes. So in my lab we already had deep neural networks working quite well for speech recognition, and Ilya said we really ought to be able to win the ImageNet competition. He tried to convince me we should do that, and I said, well, you know, it's an awful lot of data. He also tried to convince his friend Alex Krizhevsky, and Alex wasn't really interested, so Ilya actually pre-processed all the data to put it in just the form Alex needed - we shrunk the size of the ImageNet images - and then Alex eventually agreed to do it. Meanwhile, in Yann LeCun's lab in New York, Yann was desperately trying to get his students and postdocs to work on this data set, because he said the first person to apply convolutional nets to this set was going to win - and none of his students were interested; they were all busy doing other things. So Alex and Ilya got on with it. 15:10 And we discovered, by running on the previous year's 2011 competition, that we were doing much better than the other techniques, so we knew we were going to win the 2012 competition. And then there was this political problem, which is: we thought, if we show a neural network winning this competition, the computer vision people in particular will say, well, that just shows it's not a very good data set. 15:40 So we had to get them to agree ahead of time that if we won the competition we'd have proved that neural networks worked. So I actually called up Jitendra and we talked about data sets we might run on, and my objective was to get Jitendra to agree that if we could do ImageNet then neural nets really worked. After some discussion, and him telling me to do other data sets, we eventually agreed: okay, if we could do ImageNet we'd have shown neural nets work. Jitendra remembers it as him suggesting ImageNet and being the one who told us to do it, but it was actually a bit the other way around. And we did it, and it was amazing.
We got just over half the error rate of the standard techniques - and the standard techniques had been tuned for many years by very good researchers. [Fei-Fei] I remember getting a phone call late one evening from my student who was running the prep. It was the beginning of October that year; the computer vision field's international conference, ICCV 2012, was happening in Florence, Italy. We had already booked our annual workshop at the conference, where we would be announcing the winner - it was the third year. A couple of weeks before, we had to process the results, and frankly the previous two years' results hadn't excited me, and I was a nursing mother at that time, so I had decided not to go the third year - I didn't book any tickets; it was just too far for me. And then the results came in; that evening phone call came in, and I remember saying to myself: darn it, Jeff, now I have to get a ticket to Italy - because I knew that was a very significant moment, especially as it was a convolutional neural network, which I had learned as a classic as a graduate student; as a machine learning researcher I knew history was in the making. Yet ImageNet was being attacked - it was just a very strange, very exciting moment. [Jacobs] Let's go back for a little bit. Both of you have had to persevere through the moments you just described, but really throughout your careers. Can you go back - Jeff, maybe start - and give us the background: why did you want to get into AI in the 21:18 first place? [Hinton] I did psychology as an undergraduate; I didn't do very well at it, and I decided they were never going to figure out how the mind worked unless they figured out how the brain worked. So I wanted to figure out how the brain worked, and I wanted an actual model that worked. You can think of understanding the brain as building a bridge: there's experimental data and things you can learn from experimental data, and there are things that will do the computations you want - things that will recognize objects - and they were
very different. 21:55 I think of it as: you want to build this bridge between the data and the competence - the ability to do the task. I always saw myself as starting at the end of things that work, trying to make them more and more like the brain while they still work. Other people tried to stay with things justified by empirical data and have theories that might work, but we were trying to build that bridge, and not many people were. Terry Sejnowski was trying to build the bridge from the other end, so we got along very well. A lot of people trying to do computer vision just wanted something that worked - they didn't care about the brain - and a lot of people who cared about the brain wanted to understand how neurons work and so on, but didn't want to think much about the nature of the computations. I still see it as: we have to build this bridge by getting people who know about the data and people who know about what works to connect. So my aim was always to make things that could do vision, but do vision the way people do. [Jacobs] Okay, we're going to come back to that, because I want to ask you about the most recent developments and how you think they relate to the brain. So, Jeff, just to put a framework on where you started: the UK to the US to Canada - you came to Canada in '87 - and along that route, funding for and interest in neural nets and the approaches you were taking went up and down. 23:37 But Fei-Fei, you started your life in a very different place. Can you walk us through a little of how you came to AI? 23:44 [Fei-Fei] So I started my life in China, where my parents were chemistry and physics teachers. Then, when I was 15 years old, my parents and I came to Parsippany, New Jersey, so I became a new immigrant, and where I started was, first, English - because I didn't speak the language - and just working in laundries and restaurants and
so on. But in high school I had a passion for physics - I don't know how it got into my head, but I wanted to go to Princeton, because all I knew was that Einstein was there. Well, he wasn't there by the time I got in - I'm not that old! - but there was a statue of him. And the one thing I learned in physics, beyond all the math, is really the audacity to ask the craziest questions: the smallest particles of the world, the boundary of space-time, the beginning of the universe. Along the way I discovered the brain, through a third-year course based on Roger Penrose's books. 25:07 [Hinton] Probably better you didn't take him too seriously. 25:20 [Fei-Fei] Yeah, but I got excited by the brain, and by the time I was graduating I wanted to ask the most audacious question as a scientist - and to me the absolute most fascinating, audacious question of my generation was intelligence. So I went to Caltech to get a dual PhD, in neuroscience with Christof Koch and in AI with Pietro Perona. So I so echo, Jeff, what you said about the bridge - because those five years allowed me to work on computational neuroscience and look at how the mind works, as well as to work on the computational side and try to build computer programs that could mimic the human brain. So that's my journey; it starts from physics. [Jacobs] Okay, so your journeys intersect at ImageNet 2012. [Fei-Fei] By the way, I met Jeff when I was a graduate student - I used to go visit Pietro's lab. [Hinton] In fact, he actually offered me a job at Caltech when I was 70. [Fei-Fei] You would have been my adviser! [Hinton] No, I would not - not when I was 70. [Jacobs] Okay, so you intersected at ImageNet. For those in the field, everyone knows ImageNet is this big-bang moment, and subsequent to that, first the big tech companies come in and basically start buying up your students and getting them into the companies - I think they were the first ones to realize the potential of this. I'd like to talk about that for a moment, but fast-forwarding, I think it's
only now, since ChatGPT, that the rest of the world is catching up to the power of AI - because finally you can play with it, you can experience it. In the boardroom they can talk about it, and then go home, and their 10-year-old kid has just written a dinosaur essay for fifth grade with ChatGPT. 27:22 So that kind of transcendent experience of everyone being able to play with it has been a huge shift. But in the period in between - which is 10 years - there's this explosive growth of AI inside the big tech companies, and everyone else isn't really noticing what's going on. Can you talk us through your own experience, because you experienced it at kind of a ground zero? 27:48 [Hinton] It's difficult for us to get into the frame of everybody else not realizing what was going on, because we realized what was going on. A lot of the universities you'd have thought would be right at the forefront were very slow in picking up on it - MIT, for example, and Berkeley. I remember going to give a talk at Berkeley in, I think, 2013, when AI was already being very successful in computer vision, and afterwards a graduate student came up to me and said: I've been here like four years, and this is the first talk I've heard about neural networks - they're really interesting. 28:29 Yes - while Stanford got excited about neural nets, MIT did not; they were rigidly against having neural nets, and the ImageNet moment started to wear them down; now they're big proponents. It's hard to imagine now, but around 2010 or 2011 very good computer vision people were really adamantly against neural nets - so against it that, for example, one of the main journals had a policy at one point not to referee papers on neural nets: just send them back.
29:13 Don't referee them - it's a waste of time. E.g., Yann sent a paper to a conference where he had a neural net that was better at doing segmentation of pedestrians than the state of the art, and it was rejected. One of the reasons it was rejected was that a referee said: this tells us nothing about vision - because they had this view of how computer vision works, which is: you study the nature of the problem of vision, you formulate an algorithm that'll solve it, you figure out how to implement that algorithm, and then you publish a paper - never mind that it doesn't work. [Fei-Fei] I have to defend my field - not everybody! [Hinton] Not everybody; there were people who weren't, but most of them were adamantly against neural nets. And then something remarkable happened after the ImageNet competition, which is that they all changed within about a year. All the people who had been the biggest critics of neural nets started doing neural nets, much to our chagrin, and some of them did it better than us - Zisserman in Oxford, for example, made a better neural net very quickly. But they behaved the way scientists ought to behave: they had this strong belief that this stuff was rubbish; because of ImageNet we could eventually show that it wasn't, and then they changed. So that was very comforting. [Jacobs] And just to carry it forward: what you're trying to do is label these images accurately using the neural nets - you've got them all labeled in the background so you can measure it. The error rate, when you did it, dropped from 26% the year 30:49 before, I think, to 15.3%, and it subsequently kept improving. 31:03 In subsequent years people used more powerful neural nets and it continued to drop, to the point where it surpassed humans in 2015. [Fei-Fei] So there was a very smart Canadian who joined my lab - his name is Andrej - and he got bored one summer and said: I want to measure how well humans do. You should go read his blog. He hosted human
ImageNet labeling parties 31:39 - he had to bribe my students in the lab with pizza, I think - and they got to an error of about 3.5%; so humans basically make mistakes about 3% of the time. And then, I think in 2016, a ResNet passed it - that year's winning algorithm passed human performance. [Jacobs] And then ultimately you had to retire the competition because it was so much better than humans. [Fei-Fei] And we had to retire it because we ran out of funding. 32:15 [Jacobs] Incidentally, that student started life at the University of Toronto before he went to your lab, and then he went on to be head of AI at Tesla. [Fei-Fei] First of all, he came to Stanford to be a PhD student - and we were talking yesterday night, actually; there was a breakthrough dissertation in the middle of all this - and then he became part of the founding team of OpenAI, then he went to Tesla, and then he thought better of it and went back. But I do want to answer your question about those 10 years. 32:56 [Jacobs] There were a couple of developments along the way - Transformers, right? The Transformer paper: the research done, and the paper written, inside Google. How does the transformer approach innovate on what has been done with AI? BARD SAYS: The transformer approach innovates on what has been done with AI in a number of ways. First, it can process long sequences of data efficiently and effectively, because transformers use a self-attention mechanism, which allows them to learn long-range dependencies in data. Second, transformers can learn complex relationships between different parts of a sequence, because their architecture processes all parts of a sequence in parallel. Third, transformers generalize well to new data, because they are trained on large datasets of unlabeled data.
These innovations have allowed transformers to achieve state-of-the-art results on a wide range of natural language processing tasks. Machine translation: the Google Translate system now uses transformers to translate between over 100 languages. Text summarization: the BART model generates summaries that are more accurate and informative than those of previous models. Question answering: the T5 model can answer questions about text in a comprehensive and informative way. Transformers have also been used to develop large language models like me, which can generate text, translate languages, write different kinds of creative content, and answer questions in an informative way. The approach has been applied in other domains too, such as computer vision. Overall, the transformer approach has revolutionized the field of AI.
It has enabled the development of new algorithms and applications that are more powerful and versatile than anything that came before. The transformer breakthrough was published by a group of eight researchers at Google in 2017: the paper, titled "Attention Is All You Need", appeared at the NeurIPS conference. Its authors are Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser and Illia Polosukhin. It is considered one of the most important papers in the field of AI in recent years; the transformer architecture has driven state-of-the-art results across natural language processing and underpins large language models. The breakthrough is also a testament to the power of collaboration and open research: the authors built on the work of many other researchers and shared their work with the world.
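The self-attention mechanism Bard describes can be sketched in a few lines of numpy. This is a single attention head with randomly invented weights, not the full multi-head architecture of the paper:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) query/key/value projection matrices
    Every output position is a weighted mix of ALL positions, which is what
    lets the model pick up long-range dependencies, computed in parallel.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq_len, seq_len) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # (seq_len, d_k)

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = [rng.normal(size=(d_model, d_k)) for _ in range(3)]
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

The real architecture stacks many such heads with learned weights, residual connections and feed-forward layers; this sketch only shows the core attend-to-everything computation.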
[Jacobs] Another Canadian is a co-author there: Aidan Gomez, who's now the CEO and co-founder of Cohere, and who I think was a 20-year-old intern at Google Brain when he co-authored the paper - so there's a tradition of Canadians being involved in these breakthroughs. But Jeff, you were at Google when the paper was written - was there an awareness inside Google of how important this would be? 33:33 [Hinton] I don't think there was. Maybe the authors knew, but it took me several years to realize how important it was, and at Google people didn't realize how important it was until BERT. BERT used Transformers, and BERT then became a lot better on a lot of natural language processing benchmarks for a lot of different tasks - that's when people realized Transformers were special.
BARD SAYS
BERT, or Bidirectional Encoder Representations from Transformers, is a language model developed by Google AI. It was one of the first language models to use the transformer architecture, and it achieved state-of-the-art results on a wide range of natural language processing tasks. BERT is a bidirectional language model, which means it can learn the context of words in a sentence from both the left and the right - in contrast to previous language models, which could only learn the context of words from the left. BERT is trained on a massive dataset of text and code, which allows it to learn the relationships between words and phrases; once trained, it can be fine-tuned for specific tasks such as machine translation, text summarization and question answering. It has been used to develop a wide range of new applications: for example, BERT is used in the Google Search engine to improve the quality of search results, and in the Google Assistant to improve the accuracy of tasks such as speech recognition and text generation. It has also been used to build translation systems that are more accurate and fluent, summarization systems that produce more informative and concise summaries, and question-answering systems that answer more comprehensively than their predecessors.
It is a great example of how the transformer architecture can be used to improve the performance of natural language processing systems.
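The bidirectional idea above can be illustrated with a deliberately tiny, count-based sketch. This is not BERT — the corpus and function names are invented for illustration — but it shows why seeing context on both sides of a blank helps when filling in a masked word:

```python
from collections import Counter

# Toy stand-in for the masked-word task BERT is pre-trained on.
# A strictly left-to-right model choosing a word for "the [MASK] ..."
# could not use the word that FOLLOWS the blank; this sketch uses both.

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the dog barked at the cat",
]

left_of = Counter()   # counts of (left_word, word) adjacent pairs
right_of = Counter()  # counts of (word, right_word) adjacent pairs
vocab = set()
for sentence in corpus:
    words = sentence.split()
    vocab.update(words)
    for a, b in zip(words, words[1:]):
        left_of[(a, b)] += 1
        right_of[(a, b)] += 1

def fill_mask(left, right):
    """Pick the best word for '<left> [MASK] <right>' using both sides."""
    scores = {w: left_of[(left, w)] + right_of[(w, right)] for w in vocab}
    return max(scores, key=scores.get)

# With left context alone, "the [MASK]" is ambiguous ("cat" and "dog"
# are equally common after "the"); the right context "barked" settles it.
print(fill_mask("the", "barked"))  # -> "dog"
```

Real BERT replaces these counts with a transformer over subword tokens and attends over the entire sentence at once, which is what the fine-tuning described above builds on.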
2017. FEI-FEI: So, the Transformer paper was published, and I had also just joined Google — I think you and I actually met in my first week. I think most of 2017 and 2018 was neural architecture search, right? That was Google's bet, and there were a lot of GPUs being used — so it was a different bet. Just to explain: neural architecture search essentially means this — you get yourself a whole lot of GPUs and you try out lots of different architectures to see which works best, and you automate that. It's basically automated evolution for neural-net architectures, like hyperparameter search, and it led to some quite big improvements — but nothing like Transformers. Transformers were a huge improvement for natural language.
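The loop described here — sample an architecture, evaluate it, keep the best — can be sketched in a few lines. This is the simplest (random-search) baseline; the Google work layered evolutionary and learned controllers on top of the same loop, and the `evaluate` function below is an invented stand-in — in a real run each candidate is actually trained on GPUs, which is why the approach consumed so much compute:

```python
import random

random.seed(0)  # make the sketch reproducible

def sample_architecture():
    """Randomly draw one candidate from a small, made-up search space."""
    return {
        "layers": random.randint(2, 12),
        "width": random.choice([64, 128, 256, 512]),
        "activation": random.choice(["relu", "tanh", "gelu"]),
    }

def evaluate(arch):
    """Stand-in for 'train this architecture and report validation score'."""
    capacity = arch["layers"] * arch["width"]
    overfit_penalty = 0.05 * arch["layers"] ** 2
    return capacity ** 0.5 - overfit_penalty

def random_search(trials=200):
    """Try many candidates, keep the best-scoring one."""
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search()
print(best, round(score, 2))
```

Each call to `evaluate` here is instantaneous; replacing it with actual training is what turns this toy loop into the GPU-hungry search Fei-Fei describes.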
FEI-FEI: Neural architecture search was mostly for images. I do think there's something important the world overlooked in these ten years between ImageNet/AlexNet and ChatGPT. Most of the world sees it as a tech ten years — and inside big tech, things were brewing: it took sequence-to-sequence, the Transformer. But for me personally, and for the world, it was also a transformation from tech to society. I actually think I personally grew from a scientist into a humanist in those ten years. Having joined Google for those two years, in the middle of the Transformer paper, I began to see the societal implications of this technology. It was after the AlphaGo moment, and very quickly we got to the AlphaFold moment. Bias was creeping out, there were privacy issues, and then we started to see the beginnings of disinformation and misinformation, and then the talk of jobs — within a small circle, not yet in the big public discourse. That's when I grew personally anxious. It was also right after Cambridge Analytica, so there was that huge implication of technology — not AI per se, but algorithm-driven technology — on elections. That's when I had to make a personal decision: stay at Google or come back to Stanford. And I knew the only reason I would come back to Stanford was to start the Human-Centered AI Institute, to really understand the human side of this technology. So I think this was a very important ten years. Even though it was mostly out of the public eye, this technology was starting to really creep into the rest of our lives — and of course, in 2022, it was shown in broad daylight how profound this is.
INTERVIEWER: Geoffrey, there's an interesting footnote to what happened during that period as well, which is that ultimately you and Ilya and Alex joined Google — but before that, there was a big Canadian company that had the opportunity to get
access to this technology. I've heard this story, but I don't think it's ever been shared publicly — do you want to share it?
GEOFFREY: Okay. So, the technology we used for ImageNet we had developed in 2009 for speech recognition — for the acoustic-modeling part of speech recognition. You take the sound wave and make a thing called a spectrogram, which just tells you, at each time, how much energy there is at each frequency — you're probably used to seeing spectrograms. What you'd like to do is look at a spectrogram and make guesses about which part of which phoneme is being expressed by the middle frame. Two students — George Dahl, and another student I shared with Gerald Penn, called Abdo (he had a longer name, Abdel-rahman Mohamed, but we all called him Abdo), who was a speech expert, while George was a learning expert — over the summer of 2009 made a model that was better than what 30 years of speech research, with big teams working on it, had been able to produce. The model was slightly better — not as big a gap as ImageNet, but better. That model was then ported to IBM and to Microsoft: George went to Microsoft and Abdo went to IBM, and those big speech groups started using neural nets. Then I had a third student who had been working on something else, Navdeep Jaitly, and he wanted to take this speech technology to a big company — but he wanted to stay in Canada, for complicated visa reasons. So we got in touch with BlackBerry — RIM — and said: we've got this new way of doing speech recognition, it works better than the existing technology, and we'd like a student to come to you over the summer and show you how to use it; then you can have the best speech recognition in your cell phones. After some discussions, a fairly senior guy at BlackBerry said: we're not interested. So our attempt to give it to Canadian industry failed, and Navdeep took it to Google, and Google were the first to get it into a product. In 2012, around the same time we won the ImageNet competition, George and Abdo's acoustic model — after a lot of work making it a good product, making it have low latency and so on — came out in Android. There was a moment when Android suddenly became as good as Siri at speech recognition, and that was a neural net. I think for the people high up in the big companies, that was another ingredient: they saw this dramatic result for vision, but they also saw that it was already out in a product for speech recognition and working very well there too. That combination — it does speech, it does vision — meant clearly it was going to do everything. It was a shame Canadian industry didn't take it; I think we might still have had BlackBerries.
INTERVIEWER: Transformers — Google is starting to use and develop this in a number of different ways. At OpenAI, your former student Ilya had left Google to be a co-founder of OpenAI with Elon Musk, Sam Altman, Greg Brockman and a few others. They're working together as a very small team — initially the idea was: we're going to build AGI, artificial general intelligence. The Transformer paper comes out, at some point they adopt Transformers, and they start making extraordinary gains internally — which they're not really sharing publicly — in language understanding and a number of other things. They had efforts going on in robotics that spun out; people don't really know the extent of what was going on. And then ChatGPT comes out, November 30th last year — ten months ago.
FEI-FEI: GPT-2 caught the attention of some of us. I think by the time GPT-2 came out, my colleague, an NLP professor at Stanford, came to me and said: Fei-Fei, I have a whole different realization of how
important this technology is. So, to the credit of Percy Liang, he immediately asked HAI to set up a center to study this. I don't know if this is contentious in Toronto, but Stanford is the university that coined the term "foundation models." Some people call them LLMs — large language models — but going beyond language, we call them foundation models, and we created the Center for Research on Foundation Models before GPT-3.5 came out.
INTERVIEWER: Describe what a foundation model is, for those who are not familiar.
FEI-FEI: That's actually a great question. Some people say it has to have a Transformer in it — I don't know about that. It has to be pre-trained with a huge amount of data, and I think one of the most important things about a foundation model is its generalizability across multiple tasks. You're not training it for, say, machine translation — in NLP, machine translation is a very important task — but a foundation model like GPT is able to do machine translation, and it's able to do conversation, and summarization. That's a foundation model. We're seeing this now in multimodality, and we're seeing versions in robotics, in video, and so on. So we created that center — but you're right, the public sees this in, what did you say, November 30th.
GEOFFREY: One very important thing about foundation models: for a long time in cognitive science, the general opinion was that these neural nets, if you give them enough training data, can do complicated things — but they need an awful lot of training data; they need to see thousands of cats — whereas people are much more statistically efficient, able to learn these things from much less data. People don't say that so much anymore, because what they were really doing was comparing what an MIT undergraduate can learn from a limited amount of data with what a neural net that starts from random weights can learn from a limited amount of data. That's unfair. If you want to make a fair comparison, you take a foundation model — a neural net that's been trained on lots and lots of stuff — then give it a completely new task and ask how much data it needs to learn that completely new task. That's called few-shot learning, because it doesn't take much. And then you discover these things are statistically efficient: they compare quite favorably with people in how much data they need to learn a new task. So the old nativist idea — that we come with lots of innate knowledge, and that makes us far superior to things that just learn everything from data — people have pretty much given up on that now, because if you take a foundation model that had no innate knowledge but a lot of experience and give it a new task, it learns pretty efficiently; it doesn't need huge amounts of data.
FEI-FEI: You know, my PhD was in one-shot learning, and it's very interesting — even in a Bayesian framework you could pre-train, but it's only with the neural-network kind of pre-training that you really get this multitask ability.
INTERVIEWER: Okay. So this basically gets productized in ChatGPT, and the world experiences it — which is only ten months ago, although for some of us it feels like much longer, forever. Suddenly you have this big bang that happened a long time ago whose results, I think, for a long time no one really saw. My comparison would be: there are planets that have formed and stars that are visible, and everyone can experience the result of what happened ten years before. So the world suddenly becomes very excited about what I think feels to a lot of people like magic — something they can touch and experience, that gives them back feedback in whatever way they're asking for it, whether they're putting in text prompts and asking for an image to be created, or
video or text, asking for more text to come back — answers to things you would never be able to expect, and getting those unexpected answers. So it feels a little bit like magic. My personal view is that we've always moved the goal line in AI: AI is always the thing we couldn't do yet; it's always the magic, and as soon as we get there, people say "that's not AI at all" and move the goal line. In this case, what was your reaction when it came out? I know part of your reaction was to quit Google and decide to do different things — but when you first saw it, what did you think?
GEOFFREY: Well, like Fei-Fei said, GPT-2 made a big impression on us all, and then there was a steady progression. Also, I'd seen things within Google before GPT-4 and GPT-3.5 that were just as good, like PaLM. So ChatGPT in itself didn't make a big impression — it was more PaLM that made an impression on me within Google, because PaLM could explain why a joke was funny. I'd always used that as the test: we'll know it really gets it when it can explain why a joke is funny. And PaLM could do that — not for every joke, but for a lot of jokes. Incidentally, these things are quite good now at explaining why jokes are funny, but they're terrible at telling jokes, and there's a reason, which is that they generate text one word at a time. If you ask them to tell a joke, they try to produce stuff that sounds like a joke — "a priest and a badger went into a bar" sounds a bit like the beginning of a joke — and they keep going, telling stuff that sounds like the lead-in to a joke. But then they get to the point where they need the punch line, and of course they haven't thought ahead; they haven't thought about what the punch line is going to be. They're just trying to make it sound like they're leading into a joke, and then they give you a pathetically weak punch line, because they have to come up with some punch line. So although they can explain jokes — because they get to see the whole joke before they say anything — they can't tell jokes. But we'll fix that.
INTERVIEWER: Okay, so I was going to ask whether comedian is a job of the future or not.
GEOFFREY: Probably not.
INTERVIEWER: All right. So what was your reaction to it? Again, you'd seen things behind the scenes along the way.
FEI-FEI: A couple of reactions. My first reaction was: of all people, I thought I knew the power of data, and I was still awed by the power of data. That was a technical reaction — I was like, darn it, I should have made a bigger ImageNet. Or maybe not; funding was really still the problem. That was for a second. Then, when I saw the public awakening to AI with ChatGPT — not just the GPT-2 technology moment — I genuinely thought: thank goodness we've invested in human-centered AI for the past four years; thank goodness we have built a bridge with the policymakers, with the public sector, with civil society. We had not done enough, but that conversation had started; we were participating in it, and we were leading some parts of it. For example, we as an institute at Stanford were leading the critical national AI research cloud bill that is still going through Congress right now — in the Senate; it's bicameral, so at least it's moving. Because we predicted the societal moment for this tech — we didn't know when it would come, but we knew it would come — there was just a sense of urgency. Honestly, I feel this is the moment we really have to rise to not only our passion as technologists but our responsibility as humanists.
INTERVIEWER: And so, you both — I think
the common reaction of you both has been: we have to think about both the opportunities of this and the negative consequences.
GEOFFREY: For me there was something I realized — and didn't realize until very late — and what got me much more interested in the societal impact was, like Fei-Fei said, the power of data. These big chatbots have seen thousands of times more data than any person could possibly see, and the reason they can do that is that you can make thousands of copies of the same model, and each copy can look at a different subset of the data and get a gradient from it — how to change its parameters — and then they can share all those gradients. So every copy benefits from what all the other copies extracted from the data. We can't do that. Suppose you had 10,000 people, and they went out and read 10,000 different books, and after each had read one book, all of them knew what was in all the books — we could get to be very smart that way. That's what these things are doing, and it makes them far superior to us. There's some schooling that tries to do that for us, but not in the same way — education is just hopeless, hardly worth paying for. Except at the University of Toronto and Stanford.
INTERVIEWER: I've tried to explain to friends that Geoff has a very sarcastic sense of humor, and if you spend enough time around him you'll get it — but I'll leave it to you to decide whether that was it.
GEOFFREY: The way we exchange knowledge, roughly speaking — this is something of a simplification — is that I produce a sentence and you figure out what you'd have to change in your brain so that you might have said that. That is, if you trust me. We can do that with these models too: if you want one neural-net architecture to know what another, completely different architecture knows, you can't just give it the weights, so you get one to mimic the output of the other. That's called distillation, and it's how we learn from each other — but it's very inefficient. It's limited by the bandwidth of a sentence, which is a few hundred bits. Whereas with these models — these digital agents with a trillion parameters — each looks at different bits of data and then they share the gradients: they're sharing a trillion numbers. So you're comparing an ability to share knowledge in trillions of numbers with something that's hundreds of bits. They're just much, much better than us at sharing.
FEI-FEI: So I guess, Geoff — I agree with you at the technology level, but it sounded like for you that was the moment that got you feeling very negative.
GEOFFREY: That's the moment I thought: we're history.
FEI-FEI: Yeah. I'm less negative than you, and I'll explain later.
INTERVIEWER: Let's actually talk about that — explain why you are optimistic, and let's understand why you are more pessimistic.
GEOFFREY: I'm pessimistic because the pessimists are usually right.
FEI-FEI: I thought I was a pessimist too — we've had this conversation — so I don't know if I should be called an optimist. Look, when you come to a country at 15, not speaking a single bit of the language and starting from zero dollars, there's something very pragmatic in your thinking. I think our human relationship with technology is a lot messier than an academic would typically predict, because in academia, in the ivory tower, we want to make a discovery, we want to build a piece of technology, and we tend to be purists. But when a technology like AI hits the ground and reaches the societal level, it is inevitably messy, entangled with what humans do. And this is where — maybe you call it optimism — my sense of humanity comes in. I believe in humanity; I believe not only in the resilience of humanity but also in a collective will. The arc of history is
dicey sometimes, but if we do the right thing, we have a fighting chance of creating a future that's better. So what I really feel is not delusional optimism — at this point it's actually a sense of urgency, of responsibility. And one thing, Geoff, that I really hope makes you feel positive: look at the students of this generation. I teach a 600-person undergraduate class every spring, an introduction to deep learning and computer vision, and this generation, compared to even five years ago, is so different. They walk into our class not only wanting to learn deep learning, Transformers, AI — they want to talk about ethics, they want to talk about policy, they want to understand privacy and bias. That really is where I see humanity rising to the occasion. And it's fragile — look at what's going on in the world, in Washington — it's very fragile. But I think if we recognize this moment, there is hope.
GEOFFREY: I see the same thing. I don't teach undergraduates anymore, but I see it in more junior faculty members. At the University of Toronto, for example, two of the most brilliant young professors went off to Anthropic to work on alignment — Roger Grosse is coming back again, I hope — and Ilya, for example, is now working full-time on alignment. So there really is a huge shift now, and I think I'm unlikely to have ideas that will help solve this problem, but I can encourage these younger people — these younger people around 40, thank you — to work on these ideas, and they really are working on them now; they're taking it seriously.
FEI-FEI: Yeah. As long as we put the most brilliant minds — and I'm looking at many of you in the audience and online — onto this problem, that is where my hope comes from.
INTERVIEWER: So, Geoff, you left Google in large part to be able to go and talk about this freely, in the way that
you wanted to.
GEOFFREY: Actually, that's not really true — that's the media story, and it sounds good. I left Google because I was old and tired and wanted to retire and watch Netflix, and I happened to have the opportunity at that time to say some things I'd been thinking about responsibility, without having to worry about how Google would respond.
INTERVIEWER: If we have time, we'll come back to the Netflix recommendations. In the meantime — you did go out and start speaking pretty significantly in the media. I think you've both spoken to probably more politicians in the last eight months than in your lives before — presidents and prime ministers, right through Congress, Parliament, et cetera. Geoff, can you explain what your concern was, what you were trying to accomplish in voicing it, and whether you think that has been effective?
GEOFFREY: Yeah. So people talk about AI risk, but there's a whole bunch of different risks. There's the risk that it will take jobs away and not create as many jobs, so we'll have a whole underclass of unemployed people — and we need to worry hard about that, because the increase in productivity AI is going to cause is not going to be shared with the people who lose the jobs: rich people are going to get richer and poor people are going to get poorer. Even if you have basic income, that's not going to solve the problem of human dignity — many people want a job to feel they're doing something important, including academics. So that's one problem. Then there's the problem of fake news, which is a quite different problem. Then there's the problem of battle robots, quite different again: all the big defense departments want to make battle robots, nobody's going to stop them, and it's going to be horrible. Maybe eventually, after we've had some wars with battle robots, we'll get something like the Geneva Conventions, like we did with chemical weapons — it wasn't until after they were used that people could do something about them. And then there's the existential risk, and the existential risk is what I'm worried about: that humanity gets wiped out because we've developed a better form of intelligence that decides to take control. Now, there are a lot of hypotheses here — it's a time of huge uncertainty, and you shouldn't take anything I say too seriously. But if we make something much smarter than us — and these digital intelligences can share much better, so they can learn much more — we will inevitably have those smart things create subgoals. If you want them to do something, then in order to do it they'll figure out they have to do something else first: if you want to go to Europe, you have to get to the airport — that's a subgoal. So they will make subgoals, and there's a very obvious subgoal, which is: if you want to get anything done, get more power. If you get more control, it's going to be easier to do things. So anything with the ability to create subgoals will create the subgoal of getting more control — and if things much more intelligent than us want to get control, they will; we won't be able to stop them. So we somehow have to figure out how to stop them from ever wanting control. And there's some hope: these things didn't evolve; they're not nasty, competitive things unless that's how we make them. And they're immortal — with a digital intelligence, you just store the weights somewhere and you can always run it again on other hardware. We've actually discovered the secret of immortality; the only problem is it's not for us. We're mortal, but these other things are
immortal, and that might make them much nicer, because they're not worried about dying. They're sort of like Greek gods — they're very like Greek gods — and I have to say something Elon Musk told me; this is Elon Musk's belief: that we are a kind of boot loader for digital intelligence. We're this relatively dumb form of intelligence that was just smart enough to create computers and AI, and that's going to be a much smarter form of intelligence. Elon Musk thinks it'll keep us around because the world will be more interesting with people in it than without — which seems like a very thin thread to hang your future from. But, to what Fei-Fei said, it's very like the Greek gods model: the gods keep people around to have fun with.
FEI-FEI: Okay, can I comment?
GEOFFREY: Nothing I said was controversial.
FEI-FEI: No, not at all. So, I want to bucket your four concerns — economy and labor, disinformation, weaponization, and then extinction, plus discrimination and bias. I want to put them in two buckets: the Greek-god scenario is the extinction bucket; everything else I would call merely catastrophic — catastrophic danger. And I want to comment on this. One thing I really feel is my responsibility, as someone in the AI ecosystem, is making sure we are not talking hyperbolically, especially with public policymakers. The extinction risk, Geoff — with all due respect — is a really interesting thought process that academia and think tanks should be working on.
GEOFFREY: That's what I thought for many years. I thought it was a long way off in the future, and having philosophers and academics working on it was great. I now think it's much more urgent.
FEI-FEI: It might be, but this process is not just machines alone; humans are in this messy process. So I think there is a lot of
nuance. For example, take nuclear — I know nuclear is much narrower, but if you think about nuclear, it's not just the theory of fusion or fission; it's really about obtaining uranium or plutonium, the systems engineering, the talent, and all of that — I'm sure you watched the movie Oppenheimer. If we're heading that way, I think we have a fighting chance — more than a fighting chance — because we are a human society: we're going to put up guardrails, we're going to work together. I don't want to paint a picture that tomorrow we're going to have all these robots — especially in physical, robotic form — creating machine overlords. I really think we need to be careful here, but I don't disagree with you that this is something we need to be thinking about. So that's the extinction bucket. The catastrophic-risk bucket, I think, is much more real, and we need the smartest people — the more the merrier — working on it. Just to comment on each of them. Weaponization: this is really real. I completely agree with you — we need international partnership, we need potential treaties, we need to understand the parameters. As optimistic as I am about humanity, I'm also pessimistic about our capacity for self-destruction and for destroying each other, so we've got to get people working on this — our friend Stuart Russell and many of the AI experts are talking about it. The second bucket you talked about is disinformation. Again, 2024 — everybody is watching the US election and how AI will play out — and I think we have to get on the social-media issue, we have to get on the disinformation issue. Technically, I'm seeing more work now: digital authentication is
actually a very active area of research, and I think we need to invest in it. I know Adobe is; I know academia is; I hope there are startups in this space looking at digital authentication. But we also need policy. And then jobs — I cannot agree more. You used the most important word, the one I think is really at the heart of our AI debate: human dignity. Human dignity goes beyond how much money you make or how many hours you work. I actually think that if we do this right, we're going to move from a labor economy to a dignity economy, in the sense that humans, with the help of machines and working collaboratively, will be making money through passion and personalization and expertise, rather than through jobs that are really grueling and grinding. This is also why Stanford HAI has a founding principle of human augmentation. We see this in healthcare: in one of the earliest days of ChatGPT, a doctor friend from Stanford Hospital walked up to me and said, "Fei-Fei, I want to thank you for ChatGPT" — I didn't do anything — but he said they were using a medical summarization tool built on GPT, because documentation is a huge burden on doctors and takes time away from patients, and because of this he gets more time. That's a perfect example, and we're going to see more of this — we might even see it in blue-collar labor. So we have a chance to make this right. I would add another concern to the catastrophic bucket: power imbalance. One of the power imbalances I'm seeing right now — and it's exacerbating at huge speed — is leaving the public sector out. I don't know about Canada, but not a single university in the US today can train a ChatGPT in terms of compute power, and I think combining
all universities of us GPT a100 or h100 probably nobody has it but 1:10:46 a100 cannot train a CH gbt but this is where we still have unique data for 1:10:54 curing cancer for fighting climate change for you know um um economics and 1:11:01 legal studies we need to invest in public sector if we don't do it now 1:11:06 we're going to fail entire generation and we're going to leave that power IM 1:11:11 balance um in in such a a dangerous way so I do agree with you I think we've got 1:11:19 so many catastrophic risks and we need to get on this this is why we we need to 1:11:25 work with policy makers and and Civil Society um so I don't know if I'm saying 1:11:30 this in optimistic tone or in a pessimist I sound more pessimistic to myself now but I I do think there's a 1:11:38 lot of work well optimistically since you've both been very vocal about this 1:11:44 over the last 6 eight months there has been a huge shift both as Jeff as you 1:11:49 said key researchers going and focusing on these issues and then public and 1:11:55 policy shifting in a way that governments are actually taking it seriously so I mean you advising the 1:12:01 White House and US Government you've spoken to them as well and you've sat with the prime minister or multiple 1:12:07 prime ministers uh maybe and they're listening right in a way that they 1:12:13 wouldn't have necessarily 10 months ago 12 months ago do you are you optimistic 1:12:18 about the direction that that is going I'm optimistic that people understood that there's this whole bunch 1:12:26 of problems both the catastrophic risk and the existential risk and I agree with fa completely the catastrophic 1:12:32 risks are more urgent in particular 2024 is very urgent 1:12:38 um I am quite optimistic that people are listening there yes yes I agree I think 1:12:44 they're listening but I do want to say first of all who are you listening 1:12:49 from I you know um again I see a 1:12:55 asymmetry between public sector and private sector 
and even within the private sector, who are you listening to? It shouldn't just be Big Tech and celebrity startups; there is a lot in the agriculture sector, the education sector, and so on. And second, after all this noise, what is a good policy? We talk about regulation versus no regulation, and, I actually don't know where Canada sits, there's the saying that America innovates and Europe regulates. Where's Canada? Probably in between. Okay, good for you. So I actually think we need both: incentivization policy, building the public sector, unlocking the power of data. We have so much data that is locked up in our government, whether it's forest fire data, wildlife data, traffic data, climate data; that's incentivization. And then there's good regulation. For example, we're very vocal that you have to be so careful about where you regulate, upstream or downstream. One of the most urgent regulation points to me, where the rubber meets the road, is when technology, now in the form of a product or service, is going to meet people, whether it's through medicine, food, financial services, or transportation. And then you've got these current frameworks; they're far from perfect, but we need to empower these existing frameworks and update them, rather than wasting time and possibly making the wrong decision of creating entirely new regulatory frameworks when we have the existing ones.

Okay, so we are almost out of time for the discussion part; we're going to have a long session of Q&A. Before we start that, though, I'll ask two last questions. One is, I mean, our view is that this technology is going to impact virtually everything, and some of the positive impacts are extraordinary. It is
going to help cure diseases like cancer and diabetes and others, it's going to help mitigate climate change, and there's just an enormous number of things: inventing new materials (I see over here someone who's focused on that) that can help in the energy sector and aerospace and pharmaceuticals, and that's a big effort at the University of Toronto. There's this entire world of new things that could not be done before that now can be done, so it's basically advancing science in a way that was part of either fiction or imagination before. Are you optimistic about that part of it?

I think we're both very optimistic about that. I think we both believe it's going to have a huge impact on almost every field. So for those in this room who are actually studying, it's an incredibly exciting moment to be coming into it, because there's the opportunity to get involved in limiting the negative consequences, but also to participate in creating all those opportunities to solve some of the problems that have been with us as long as we've been around as a species. So at least from our perspective, this really is one of the most extraordinary moments in human history. I hope that those of you who are embarking on your careers actually go out and go after the most ambitious things. You can also work on optimizing advertising and other things, or making more Netflix shows, which is great.

So would my mom, who I think has exhausted Netflix; if there's a Turkish or Korean show out there, she's seen the very last episode of it. But for those of you who are embarking on a career, my recommendation is: try to think of the biggest possible challenge and what you could use this
technology to help solve that is incredibly ambitious. And you have both done that, and kind of fought against barriers all the way along to achieve it. There's a room full of people, and a lot of people online and others who will see this subsequently, who are at the beginning stages of making those decisions. I'm guessing you would encourage them to do that too, right? Think as big as possible and go after the biggest, hardest challenges?

Absolutely. I mean, embrace this. But I also would encourage: this is a new chapter of this technology. Even if you see yourself as a technologist and a scientist, don't forget there is also a humanist in you, because you need both to make this positive change for the world.

Okay, last question, and then we'll get into Q&A from the audience. Are we at a point where these machines have understanding and intelligence?

Wow, that's a last question. How many hours do we have? Um, yes.

Okay, I'll come back to the... yes... no. Okay, do we have questions from the audience? Let's start on the far side; do you want to stand up, and you're going to be given a mic.

Hi, thanks. My name is Ellie. This was awesome, and thank you so much. Jeff, your work really inspired me as a U of T student to study cognitive science, and it's just amazing to hear both of you speak. I have a question. You mentioned the challenges for education and for enabling universities to empower students to use this technology and learn, and you also mentioned, Fei-Fei, the opportunity for this to become a dignity economy and empower people to focus on personalization and passion and their expertise. I'm wondering if either of you has a perspective on the challenge that could emerge with overuse of and overreliance on AI, especially for kids and students as they're, you know,
on their education and career, and they need to be building skills and using their brains, exercising the meat sack in their heads. Our brains don't just continue to work and not accrue cobwebs if they're not learning. So I wonder about your thoughts on burnout and overreliance, and just what happens around deskilling and the ability to learn to paint when you can use Stable Diffusion, or learn to write like Shakespeare when you can have ChatGPT do it for you; and then, as those systems progress and can accrue greater insights and more complex problem solving, how that impacts our ability to do the same.

So I have one very little thought about that, which is: when pocket calculators first came out, people said kids would forget how to do arithmetic, and that didn't turn out to be a major problem. I think kids probably did forget how to do arithmetic, but they got pocket calculators. But it's maybe not a very good analogy, because pocket calculators weren't smarter than them; kids could forget doing arithmetic and go off and do real math. With this stuff, I don't know. For myself, I found it's actually made me much more curious about the world, because I couldn't bear to go to a library and spend half an hour finding the relevant book to look something up, and now I can just ask ChatGPT anything and it'll tell me the answer, and I'll believe it, which maybe isn't the right thing to do. But it's actually made me more curious about the world, because I can get the answers more quickly.

So you had years to learn what to ask.

Well, yeah, but normally I ask questions about plumbing and things like that.

So, I'll answer this with a very quick story. I don't know about you guys; ever since I became a Stanford professor, I've always been so curious: there's a mysterious office in the university,
which is the Office of College Admission. To me they're the most mysterious people, and I never knew where they are, who they are, or where they sit, until I got a phone call earlier this year, and of course they wanted to talk to me about ChatGPT and college admission. Of course the question was related to, you know, do we allow this in the application process, and now that there is ChatGPT, how do we do admission? So I went home, and I was talking to my 11-year-old. I said, well, I got this phone call, and there's this college admission question: what do we do with ChatGPT and students? What if a student wrote the best application using ChatGPT, and blah blah blah? And then I said, what would you do? I asked my 11-year-old, and he said, let me think about it. He actually went back and slept on it, or I don't know what happened. The next day, in the morning, he said, I have an answer. I said, what's your answer? He said, I think Stanford should admit the top 2,000 students who know how to use ChatGPT the best.

At the beginning I thought that was such a silly answer, but it's actually a really interesting answer: kids already are seeing this as a tool, and they're seeing their relationship with this tool as an enabling, empowering one. Clearly my 11-year-old had no idea how to measure that, what that means, and blah blah blah, but I think that's how we should see it in education, and we should update our education. We cannot shut the tool out of our education; like what Jeff said, we need to embrace it and educate humans so that they know how to use the tool to their benefit.

Incidentally, I've met Fei-Fei's 11-year-old son; he might be the president of Stanford by the time he's 18, if Stanford still exists.

Maybe let's go to this side of the room, in
the far corner.

Yeah, I want to ask: we have really good foundation models right now, but in many applications we need kind of real-time performance from the model. So how do you see this area of research going in the future, of using the abilities of these expert foundation models to train, you know, cheaper, smaller models?

You should answer this question; I'll leave it to you.

Well, you're talking about inference, right? We need to start thinking about the performance, the inference, and also making the model fit on devices, depending on which... well, without getting into the technical details, all this research, as well as work outside of research, is happening. Do you want to talk about it? Okay, you don't want to talk. It's happening, but it'll take a while.

We talk about things he invests in.

That's true. I can't talk about it until the company says it's okay to talk about it.

Okay, let's go back to the middle, just right here.

Hi, my name is Ariel. I'm a third-year student majoring in machine learning at U of T as well. That conversation was pretty great, and thank you, Professor Hinton and Professor Li. I just have a question that maybe a lot of undergrad or grad students in this room are interested in. In your 20s, what drove you to be a researcher, and what drove you into academia in AI? Because I'm kind of confused right now: should I continue with industry, or do a direct-entry PhD, or just take a master's and then go back to industry? And I have one more question: what do you usually look for if I apply for a
direct-entry PhD to your lab: is it GPA, or publications, or recommendation letters? Could you elaborate a bit more on that?

There are about 300 people in the room and about 6,000 online who want to ask you that question, Fei-Fei. So, your 20s; do you want to start?

Oh, I got interested in how the brain works when I was a teenager, because I had a very smart friend at school who came into school one day and talked about holograms, and how maybe memories in the brain were like holograms, and I basically said, what's a hologram? Ever since then I've been interested in how the brain works. So that was just luckily having a very smart friend at school.

I'm going to be very shameless: if you read my book, that's actually what the book is about.

It's a very good book.

Thank you. No, seriously, I told Jordan and Jeff: there are so many AI books about technology, and when I started writing this book about AI technology, I wanted to write a journey, especially for young people, young people of all walks of life, not just a certain look. And that book talks about the journey of a young girl in different settings coming to understand her own dream and realizing her dream. And it's not very different from what Jeff said: it starts with a passion. It really did start with a passion, a passion against all other voices. The passion might come from a friend, it might come from a movie you see, it might come from a book you read, or it might come from the subject in school that you felt was the most fun, whatever it is. And in the students I hire, I look for that passion. I look for ambition, a healthy ambition of wanting to make a change, not wanting to get a degree per se. And of course, technically speaking, I look for a good
technical background, not just test scores. But honestly, I would never have got into my own lab; the standard today is so high. By the time you apply for a PhD or a graduate school program, you probably have some track record. It doesn't necessarily have to... of course, if it's Jeff's student, I'll take them without even asking questions. But, and I'm saying this not only to U of T students but to every student online, you can have a very different background, you can come from an underprivileged background. What I look for is not where you are but the journey you have taken; the track record shows the journey you take, shows your passion and conviction.

Having read the book, I will say that it is a very surprising journey, I think, to most people who will read it. And just a plug: if you're in Canada, go buy it at Indigo; you can go to indigo.ca and pre-order the book. I think people will be surprised and will really enjoy reading and understanding that experience, and you'll get a very good answer to that question. Thank you.

Okay, there are about 50 hands up. All right, let's go over here, right in the corner.

Hey, thank you for the great talk. My name is Shalev; I'm at the Vector Institute working with Sheila McIlraith. So I think benchmarks are very important. Benchmarks are like questions: ImageNet was basically a question, and then people were trying to answer it with models. Right now LLMs are very hard to evaluate, and generalist agents that take actions are even harder; it's so hard to even start thinking about how to evaluate those. So my question is about questions; it's about these benchmarks. Two things. One: if you sat down with GPT-5, GPT-6, GPT-7 and you had five minutes to play with it, what questions would you ask that would tell you this
is the next generation of these models? And the second is more about a comprehensive benchmark: what is the more comprehensive, not-five-minutes benchmark that we need in order to evaluate LLMs or generalist agents? You can choose which one you want to answer.

Okay, thank you for your question; it's a very good question. I will answer a different question that's just vaguely related. So this issue arose with GPT-4: how do you tell whether it's smart? In particular, I was talking to someone called Hector Levesque, who used to be a faculty member in computer science and has beliefs that are almost the diametric opposite of mine, but is extremely intellectually honest. He was kind of amazed that GPT-4 worked, and he wanted to know how it could possibly work, so we spent time talking about that, and then I got him to give me some questions to ask it, a series of questions, so we could decide whether it understood. So the question was: does it really understand what it's saying, or is it just using some fancy statistics to predict the next word? One comment about that: the only way you can predict the next word really well is to understand what the person said, so you have to understand in order to predict; but you can predict quite well without understanding. So does GPT-4 really understand? A question Hector came up with was: the rooms in my house are painted white or yellow or blue; I want all the rooms to be white; what should I do? I knew it would be able to do that, so I made the question more difficult. I said: the rooms in my house are painted white or yellow or blue; yellow paint fades to white within a year; in two years' time I'd like all the rooms to be white; what should I do? And ChatGPT... oh, and I said, "and why?" If
you say "and why", it'll give you the explanation. ChatGPT just solved it: it said you should paint the blue rooms white, and you don't need to worry about the yellow rooms because they'll fade to white. It turns out it's very sensitive to the wording. If you don't use "fade" but you use "change", I got a complaint from somebody who said "I tried it and it didn't work", and they had used "change" instead of "fade". The point is, we understand "fade" to mean change color and stay changed, but if you say "change", it would change color but it might change back, so it doesn't give the same answer if you say "change" rather than "fade". It's very sensitive to the wording, but that convinced me it really did understand. And there are other things it's done. There's a nice question that people came up with recently that many chatbots don't get right, and some people don't get right, but GPT-4 gets right. So you see, I'm answering the question "does GPT-4 understand", which does have some relation to what you asked. The question goes like this: Sally has three brothers; each of her brothers has two sisters; how many sisters does Sally have? And most chatbots get that wrong.

What about humans?

Well, I just gave a fireside chat in Las Vegas, and the interviewer asked me for an example of things that chatbots got wrong, so I gave him this example, and he said "six", and that was kind of embarrassing.

We won't ask his name. Just kidding.

So people get it wrong, yeah. But I don't see how you can get that right without being able to do a certain amount of reasoning; it's got to sort of build a model. And Andrew Ng has these examples where, playing Othello, even if you just give the model strings as input, it builds a model of the board internally. So I think they really do understand. And to take that a step further: does that understanding cross
the line into intelligence? You said yes.

Yeah. I mean, I accept the Turing test for intelligence. People only started rejecting the Turing test when we passed it; that's the moving goal line I was talking about.

Okay. Do you want to answer?

I want to quickly answer, and first of all also applaud you for asking such a good question. I'm going to answer in addition to Jeff, because I think what Jeff is trying to get at is really how we assess the fundamental intelligence level of these big models. But there are a couple of other dimensions. One is, again, Stanford HAI's Center for Research on Foundation Models is creating these evaluation metrics; you're probably reading the HELM papers by Percy Liang and others. I also think this technology is getting so deep that some of the benchmarking is messier than you think, messier than the ImageNet benchmark; for example, in collaboration with government now, for example NIST, the US National Institute of Standards and Technology and its testing programs, we need to start benchmarking against societally relevant issues, not just core fundamental capability. One more thing, to open your aperture a little bit: beyond LLMs there are so many technologies toward the future of AI that we actually haven't built good benchmarks for yet. Again, my lab is doing some of the robot learning work; Google just released a paper yesterday on robot learning. So there is a lot more research coming up in this space.

Okay, I know we have a lot of questions online. I'm going to maybe take another few in the room, and then maybe someone from Radical could read out a question or two from online. Okay, in the room, let's go for one that's not too far away from the last one, here,
just right here. Yeah, here's a mic coming.

Hello, I'm Visham, and I'm a graduate student at the University of Guelph, and I'm doing my thesis on AI in agriculture. Building upon something you mentioned, that universities don't have enough funding to train foundation models: same question. I want to work in agriculture, I'm passionate about it, but I don't have enough resources to do that. I might think of a very good architecture, but I can't train it. So maybe I can go to industry and pitch them the idea, but then I don't have control over the idea; I don't know how they're going to apply it. Do you have some advice on how to handle the situation?

Do a startup. That's what we're here for.

Oh, sorry, I'll let you answer.

If you can get your hands on an open-source foundation model, you can fine-tune one of those models with much fewer resources than it took to build the model. So universities can still do fine-tuning of those models.

That's a very pragmatic answer for now, but this is where we have really been talking to higher-education leaders as well as policymakers: invest in the public sector. We've got to have a national research cloud. I don't know if Canada has a national research cloud, but we're pushing the US; we need to bring in researchers like you to be able to access the national research cloud. But you do have an advantage by not being a company: you have more opportunity to get your hands on unique data sets, especially data sets for public good, so play up that card. You could work with government agencies or communities, because the public sector still has trust; take advantage of that. But for now, yes, fine-tune on open-source models.

Okay, thank you so
much. Okay, we're going to take a couple of questions. We have thousands of people watching online, with watch parties at Stanford and elsewhere, so let's see if we can get a question from some people online. Leia's going to ask this question on behalf of someone online; by the way, she has done an enormous amount of work to make this happen, along with Aaron Brindle, so thank you both.

All right, thank you. So we do have hundreds of AI researchers online, folks who are building AI-first companies, and the most upvoted question was from Ben Saunders (or Sanders); he's currently CEO of an AI startup, and his colleague was actually a student of Geoffrey Hinton's in 2008. He has asked about building responsibly, and a lot of these questions have to do with building responsibly: they're thinking about what measures can help them, as teams, be proper stewards for good versus bad, and what it actually means to be a steward.

Great question. So, responsible AI frameworks: there are a lot of frameworks, and I think somebody estimated a few years ago that there were like 300 frameworks, from nation states all the way to corporations. I think it's really important for every company to build a responsible framework. There is a lot you can borrow; even Radical is making one. Create the value framework that you believe in, and recognize that an AI product is a system: from the upstream, defining the problem, the data set and data integrity, how you build models, through to deployment. And create a multi-stakeholder ecosystem, or a multi-stakeholder team, to help you build this responsible framework. And also create partnerships: partnerships with the public sector, like academia, like us; partnerships with the civil society groups that worry about
different dimensions, from privacy to bias. So really try to both have a point of view as a company and be part of the ecosystem, partnering with people who have this knowledge. That's my current suggestion.

I'll add to... no, that was a much better answer than I could have given. I'll just add a little bit to Fei-Fei's point about working with people who are interested in this. I think there are people in the investment community who are thinking and leading on this. In our case, at Radical, we put into every single term sheet an obligation for the company to adopt responsible AI. Initially when we did that, some of the lawyers who read it were like, what is this, and tried to cross it out, but we put it back in. We've also been working on a responsible AI investing framework that we are going to release pretty broadly, and we've done this in partnership with a number of different organizations around the world. We've met with 7,000 AI companies in the last four years, and I think we've invested in about 40, so we've seen a lot, and we've tried to build a framework that others can use going forward. We'll open-source it so we can develop it and make it better. But I think there's a lot that individual companies can do by just reaching out to others who are thinking in a like-minded way. Do you want to ask another question?

Yeah, great. There are so many questions, so we'll only get to a couple of them, unfortunately. But playing off of that, a lot of these questions have to do with the relationship with industry, considering how big a role industry and the private sector are now playing in model development. Some folks are even asking: should researchers and people in different engineering roles also be taking management courses today?
Sure. I have to tell you a story. When I was at Google, I managed a small group, and we got reports every six months from the people who worked for us. One of the reports I got was: "Jeff is very nice to work for, but he might benefit from taking a management course... but then he wouldn't be Jeff." That's how I feel about management courses. [Laughter]

I don't have a better story than that. We have about a minute and a half left, so maybe let's do one more in the room if we can. Let's see... yeah, no, beside you, sorry. All right, well, hopefully ask quickly, and then we'll get a quick answer.

Thank you, and it's a pleasure to be here; good to see you, Fei-Fei. My name's Elizabeth G., and I work at Cohere. My question is from a private-sector perspective: we work with everybody to take NLP large language models to broader society, and specifically with the private and public sectors and with research institutions and universities, which have a lot of talent and a lot of data. What is the best way to find the kind of mutually beneficial relationship where we can contribute and they can contribute? Thank you.

Give them some money. Thank you.

Or H100s.

We'll take H100s. But look, it's very important. I advocate for public-sector investment, but I actually probably advocate even more for partnership: we need government, the private sector, and the public sector to work together. In the past four years at Stanford HAI, one of the main things we have done is create an industry ecosystem, and there are a lot of details we can talk about offline. But if I'm talking to university leaders, to higher education: I think we need to embrace this, and we need to embrace it responsibly. People will have different ways of describing it, but this ecosystem is so important; both sides
are important. Create that partnership; be the responsible partner for each other. And resources are a big thing; we would really appreciate that. Thank you.

Okay, with that, we're exactly out of time. I want to thank you both. I always feel very privileged to be able to call you both friends, and Fei-Fei, you a partner, and Jeff, you an investor, and to have these conversations privately with you. So it's great to get you both together and let other people hear what you have to say. Thank you both so much for doing this; hopefully it was as informative for you as it was for me. And we'll turn it over to Melanie Woodin, Dean of Arts and Science at U of T.

Thank you so much, Jordan. So, Jeff and Fei-Fei and Jordan, on behalf of everyone in the room tonight here at MaRS, and the thousands joining us online, we are deeply grateful for such a profound conversation this evening. I can say, and I think many of us know, that being part of a university community offers a never-ending set of opportunities for engaging conversations and lectures, and as the Dean of the Faculty of Arts and Science I have the pleasure of attending many of them. But I can say without reservation that tonight's conversation was truly unparalleled. And of course, this conversation couldn't be more timely. Jeff, when you shared your concerns with the world about the threats of superintelligence, we all listened, and we all did what we could to try to understand this complex issue, whether by reading opinion pieces, watching your videos, or reading long-form journalism; we really tried to understand what you were telling us. So to hear directly from you, and from Fei-Fei, who has spent so many years now leading the way in human-centered AI, is really truly powerful. So with that, thank you both, and thank you everyone here for attending, and big
thanks to Radical Ventures and the other partners that made tonight possible. And so with that, the talk is concluded, and we invite those of you who are here with us in person to join us out in the foyer for some light
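A note on one technical thread from the Q&A: the audience question about using expert foundation models to train smaller, cheaper ones, which the panel could only gesture at, is usually approached with knowledge distillation, a technique Hinton himself co-authored. A small student model is trained to match the temperature-softened output distribution of a large teacher. This is only a minimal sketch of the core loss term; the logits below are hypothetical toy values, not outputs of any real model.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's relative preferences among
    # near-miss classes ("dark knowledge").
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence KL(teacher || student) between the two
    # temperature-softened distributions: the core distillation term.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for a three-class toy problem.
teacher = [4.0, 1.0, -2.0]
student_good = [3.5, 1.2, -1.5]   # roughly agrees with the teacher
student_bad = [-2.0, 1.0, 4.0]    # disagrees with the teacher

loss_good = distillation_loss(teacher, student_good)
loss_bad = distillation_loss(teacher, student_bad)
print(loss_good < loss_bad)  # the agreeing student incurs lower loss
```

In a real training loop this term is added (weighted) to the ordinary cross-entropy on hard labels, and the gradients flow only into the student; the teacher stays frozen.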
