
Comment Re:Non story (Score 0) 2

A) Guess that's what the NHTSA investigation will determine (in this case, one imagines there's ample recorded data and video from Waymo to review). Depending on conditions, driving slower than the speed limit can be called for, and failing to do so can be deemed a violation under California's "Basic Speed Law."
 
B) Slower than a running person, perhaps, but a running person weighs several tons less than a Waymo vehicle (classic physics demo). Physical therapists (and personal injury lawyers!) will tell you that even the lowest-speed collisions can cause significant injuries.
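For a rough sense of that disparity, here's a back-of-the-envelope kinetic energy comparison. The masses are assumptions for illustration (roughly 2,300 kg for the Jaguar I-Pace Waymo uses, 70 kg for an adult runner), not figures from the incident:

```python
# Back-of-the-envelope kinetic energy comparison: KE = 1/2 * m * v^2.
# Masses are illustrative assumptions, not figures from the incident:
# ~2,300 kg for a Jaguar I-Pace (the vehicle Waymo uses), 70 kg for a runner.
MPH_TO_MS = 0.44704  # meters per second per mph

def kinetic_energy_joules(mass_kg: float, speed_mph: float) -> float:
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v ** 2

car = kinetic_energy_joules(2300, 6)    # Waymo's reported speed at contact
runner = kinetic_energy_joules(70, 10)  # a brisk 10 mph run, for contrast

print(f"Car at 6 mph:     {car:,.0f} J")
print(f"Runner at 10 mph: {runner:,.0f} J")
print(f"Ratio: {car / runner:.0f}x")    # order-of-magnitude difference
```

Even at the under-6-mph contact speed Waymo reported, under these assumptions the car carries an order of magnitude more kinetic energy than a fast-moving runner.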
 
Beyond the incident itself, I think the news here was Waymo's tone-deaf PR offensive of a response. Back when I was a child, I was with a classmate who got knocked off his bike after being bumped by a car. I recall the driver was very comforting to my friend, not blaming him for being where he shouldn't have been and making her hit him. Nor did she later publish an article in a high-circulation newspaper boasting to the community that her driving skills were superior to other drivers' and suggesting that anyone else would have hurt the child considerably more. :-)

Submission + - Was Waymo Robotaxi Speeding Before It 'Made Contact with a Young Pedestrian'? 2

theodp writes: The self-congratulatory, yeah-we-hit-the-kid-but-you-would-have-done-lots-worse tone of Waymo's blog post response to its robotaxi hitting a child near an elementary school in Santa Monica seemed a bit tone-deaf. It seemed even more so as commenters pointed out, and Google Maps images appeared to confirm, that the posted speed limit around Grant Elementary School in Santa Monica is 15 mph when children are present (Google Maps link, screenshot), while Waymo self-reported that the robotaxi's speed was "approximately 17 mph" when it spotted the "young pedestrian" and "braked hard" to reduce the car's speed "to under 6 mph before contact was made." Waymo did not mention what the speed limit was in its self-described 'transparent' blog disclosure.

Not that going 17 mph in a 15 mph zone is the stuff of street drag racing, but it's at odds with the attaboy Waymo gave itself for softening the blow to the child, as well as with an earlier Waymo blog post that boasted "the Waymo Driver is always alert, respects speed limits."
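As for whether 2 mph matters: a minimal constant-deceleration kinematics sketch (the 7 m/s^2 hard-braking figure is an assumption; none of these numbers are reconstructed from the incident) shows that if the available braking distance is exactly enough to stop from 15 mph, a car that starts braking at the same point from 17 mph is still doing sqrt(17^2 - 15^2) = 8 mph when it reaches that spot:

```python
import math

# Constant-deceleration kinematics: v_f^2 = v_0^2 - 2*a*d.
# All numbers are assumptions for illustration, not from the incident.
MPH_TO_MS = 0.44704
DECEL = 7.0  # m/s^2, a plausible hard-braking deceleration on dry pavement

def braking_distance_m(speed_mph: float) -> float:
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * DECEL)

def impact_speed_mph(start_mph: float, distance_m: float) -> float:
    """Speed remaining after braking over distance_m (0 if stopped in time)."""
    v0 = start_mph * MPH_TO_MS
    v_sq = v0 ** 2 - 2 * DECEL * distance_m
    return math.sqrt(max(v_sq, 0.0)) / MPH_TO_MS

d15 = braking_distance_m(15)  # distance that exactly stops a 15 mph car
print(f"Braking distance from 15 mph: {d15:.1f} m")
print(f"Residual speed from 17 mph over that distance: "
      f"{impact_speed_mph(17, d15):.1f} mph")
```

That 8 mph residual-speed result holds regardless of the assumed deceleration, which is one reason small speed differences loom large in school zones.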

From a National Highway Traffic Safety Administration report on the incident: "NHTSA is aware that the incident occurred within two blocks of a Santa Monica, CA elementary school [a Jan. 23rd police call report puts the location as the 2400 block of Pearl St.] during normal school drop off hours; that there were other children, a crossing guard, and several double-parked vehicles in the vicinity; and that the child ran across the street from behind a double parked SUV towards the school and was struck by the Waymo AV. Waymo reported that the child sustained minor injuries. [...] ODI [Office of Defect Investigation] has opened this Preliminary Evaluation to investigate whether the Waymo AV exercised appropriate caution given, among other things, its proximity to the elementary school during drop off hours, and the presence of young pedestrians and other potential vulnerable road users. ODI expects that its investigation will examine the ADS’s intended behavior in school zones and neighboring areas, especially during normal school pick up/drop off times, including but not limited to its adherence to posted speed limits. ODI will also investigate Waymo's post-impact response."

Submission + - Grace Hopper Conference Seeks $1M Corporate Partnership Tier 'Title Sponsor'

theodp writes: Among the corporate partnership opportunities listed in the Grace Hopper Celebration 2026 Prospectus is a new offering for a 'Title Sponsor', priced at a cool $1,000,000. So, what does a million bucks buy you? According to the prospectus: "Inclusion in all in-person and virtual event marketing including social, email and website * Booth space in Career Expo and Tech Expo * 3 Mainstage Sessions * Exclusive Platinum Event App Package * Logo on: Sidebar Navigation, Day-At-A-Glance Screen, Authentication Screen, Splash Screen * Interstitial Ad * 3D Map Booth Logo * Up to 90-second pre-recorded speaking opportunity to be shown at opening or closing session."

With more than 30,000 attendees and 600 speakers, GHC is the world's largest gathering for women in technology. It is run by the Anita Borg Institute for Women and Technology, which announced a reduction in force affecting a significant proportion of its workforce in 2024 amid downturns in corporate investment in DEI efforts and an overall decline in giving and philanthropy.

Submission + - Code.org Lays Off 18 Employees "To Ensure Long-Term Sustainability"

theodp writes: Tech-backed K-12 CS+AI education nonprofit Code.org (revenue) confirmed that it has laid off 18 employees, or about 14% of its staff. Following the cuts, Code.org’s staff now numbers 107. "Code.org has made the difficult decision to part ways with 18 colleagues as part of efforts to ensure our long-term sustainability," the organization said in an emailed statement. "Their contributions helped millions of educators and students around the world, and we are grateful for their efforts."

Launched in 2013 with a mission to expand computer science education to K-12 students, the organization partnered with donors Microsoft, Google, and Amazon in December to "switch hats" in a pivot expanding its focus from coding to include AI literacy, replacing its flagship annual event, the Hour of Code, with the new Hour of AI.

Submission + - The Microsoft-OpenAI Files 1

theodp writes: GeekWire takes a look at AI’s defining alliance in The Microsoft-OpenAI Files, an epic story drawn from 200+ documents, many made public Friday in Elon Musk’s ongoing suit accusing OpenAI and its CEO Sam Altman of abandoning the nonprofit mission (Microsoft is also a defendant). Musk, who was an OpenAI co-founder, is seeking up to $134 billion in damages.

Previously undisclosed emails, messages, slide decks, reports, and deposition transcripts reveal how Microsoft pursued, rebuffed, and backed OpenAI at various moments over the past decade, ultimately shaping the course of the lab that launched the generative AI era. The latest round of documents, filed as exhibits in Musk's lawsuit, shows how Microsoft CEO Satya Nadella and Microsoft's senior leadership team rally in a crisis, maneuver against rivals such as Google and Amazon, and talk about deals in private.

Even though Microsoft didn't have a seat on the OpenAI board, text messages between Nadella and Altman following Altman's firing as OpenAI CEO in Nov. 2023 (news of which sent Microsoft's stock plummeting), revealed in the latest filings, show just how influential Microsoft was. A day after Altman's firing, Nadella sent Altman a detailed message from Brad Smith, Microsoft's president and top lawyer, explaining that Microsoft had created a new subsidiary called Microsoft RAI (Responsible Artificial Intelligence) Inc. from scratch, with the legal work done and papers ready to file as soon as the Washington Secretary of State opened Monday morning, and that Microsoft was ready to capitalize and operationalize it to 'support Sam in whatever way is needed,' including absorbing the OpenAI team at a calculated cost of roughly $25 billion. (Altman's reply: "kk".) Just days later, as he planned his return as CEO to the now-reeling-from-Microsoft-punches nonprofit, Altman joined Microsoft's Nadella, Smith, and Kevin Scott in a text-message thread in which the four vetted prospective board members to replace those who had ousted Altman. Later that night, OpenAI announced Altman's return with the newly constituted board.

If you like stories with happy Microsoft endings: as part of an agreement clearing the way for OpenAI to restructure as a for-profit business, Microsoft in October received a 27% ownership stake in OpenAI worth approximately $135 billion, and it retains access to the AI startup's technology until 2032, including models that achieve AGI.

Submission + - Conference Table or Multi-Person Workstation Desk as a Dining Room Table?

theodp writes: While a house or apartment with a separate 'formal' dining room or area in addition to casual dining space is a nice-to-have, you may find you only use the space occasionally for get-togethers with family and friends. If you work from home or have kids who need a place to do their homework, have you eschewed (or considered eschewing) a traditional dining table in favor of a conference room table or multi-person workstation desk that can do double duty as your everyday workspace and occasional dining table? If so, care to share how that worked out for you (aesthetically and functionally) and tips on what to consider (height, width, finish, power outlets, chair types, cost, etc.)?

Submission + - Microsoft Elevate for Educators Launches with 'AI Merit Badges'

theodp writes: Just days after Microsoft President Brad Smith moved to deflect White House criticism over AI data centers' insatiable demand for electricity (and electricians) driving Americans' utility bills higher, Microsoft announced Microsoft Elevate for Educators, "a program connecting educators with community, professional development, and AI tools to transform teaching" in an effort to "empower every school, educator, and student to thrive with confidence in an AI-powered future."

Towards that end, Microsoft is offering new AI-powered tools "to help schools worldwide prepare educators and students for an AI-driven future" and is also seeking to credential educators with its new Microsoft Elevate Educator Credential, as well as a new Microsoft Certified Instructional Technologist and Coach certification.

Taking a page from the Girl Scouts playbook, Microsoft is encouraging teachers to pursue the Microsoft Elevate Educator pathway (from "Explorer" to "Expert" to "Fellow"), leading to recognition "for their exceptional use of Microsoft tools and resources to enhance teaching and learning experiences." And there's also a Microsoft Elevate School journey, which leads from "Pathfinder" to "Showcase" to "Beacon." Hey, be true to your AI school!

Submission + - Code.org: Use AI in an Interview Without Our OK and You're Dead to Us

theodp writes: Code.org, the nonprofit backed by AI giants Microsoft, Google, and Amazon whose Hour of AI and free AI curriculum aim to make the world's K-12 schoolchildren AI literate, points job seekers to its AI Use Policy in Hiring, which promises dire consequences for those who use AI during interviews or take-home assignments without its OK.

Explaining "What’s Not Okay," Code.org writes: "While we support thoughtful use of AI, certain uses undermine fairness and honesty in the hiring process. We ask that candidates do not [...] use AI during interviews and take-home assignments without explicit consent from the interview team. Such use goes against our values of integrity and transparency and will result in disqualification from the hiring process."

Interestingly, Code.org CEO Hadi Partovi last year faced some blowback from educators over his LinkedIn post that painted schools that police AI use by students as dinosaurs. Partovi wrote, "Schools of the past define AI use as 'cheating.' Schools of the future define AI skills as the new literacy. Every desk-job employer is looking to hire workers who are adept at AI. Employers want the students who are best at this new form of 'cheating.'"

Submission + - LEGO Education Announces CS+AI K-8 Classroom Packs Priced at $2,049-$3,179

theodp writes: Offering a new report as evidence that K-8 teachers see the benefits of hands-on computer science and AI education but lack the right tools to engage students, LEGO Education on Monday announced its Hands-on Computer Science & AI Learning Solution for children in grades K-8.

From the press release: "Today, LEGO® Education announced a new hands-on solution and curriculum for computer science and artificial intelligence (AI) for K-8 classrooms that fosters collaboration, creativity, and learning outcomes. Shipping from April 2026, LEGO® Education Computer Science & AI enables schools and districts to expand critically needed access to computer science and AI education." The offerings include Computer Science & AI Kits for 24 students priced at $2,049 for grades K-2, $2,579 for grades 3-5, and $3,179 for grades 6-8.

Not to be outdone, Amazon on Monday announced it's bringing PartyRock — its no-code approach to AI creation — into the classroom to promote AI literacy in support of the White House’s AI education initiatives. "Rather than focusing on the mechanics of AI programming," Amazon explains, "PartyRock emphasizes creative problem-solving and conceptual understanding. Students articulate their ideas through natural language descriptions, and the playground transforms these descriptions into functional applications. This approach shifts the educational focus from syntax and coding structures to the more fundamental questions of what AI can do and how it can be directed to solve problems."
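To make the "natural-language description in, working app out" idea concrete, here's a toy sketch of that pattern. Nothing here uses PartyRock's actual API; the widget-spec format and the canned stand-in for a model call are invented for illustration:

```python
import json

# Toy sketch of the "description -> app" pattern that no-code playgrounds
# like PartyRock exemplify. The widget-spec format is invented.

def plan_app(description: str) -> str:
    """Stand-in for a text-generation model call. A real playground would
    send the student's description to an LLM and get back a structured
    plan; here we return a canned spec so the sketch runs on its own."""
    return json.dumps({
        "title": description.capitalize(),
        "widgets": [
            {"type": "user_input", "label": "Ask me something"},
            {"type": "ai_output",
             "prompt": f"You are an app for: {description}. "
                       "Answer the user's input helpfully."},
        ],
    })

app = json.loads(plan_app("a study buddy that quizzes me on state capitals"))
print(app["title"])
for widget in app["widgets"]:
    print("-", widget["type"])
```

The point of the pattern is that the student's deliverable is the description and the problem decomposition, not the code that wires the widgets together.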

Submission + - Should Real-World Examples be Required for Standards and Other Mandates?

theodp writes: If someone wants to impose standards, forms, documentation requirements, and other mandates on others, it seems only fair that they should be able to (and required to) demonstrate them in action first, right? Without real-world examples of what is considered 'good,' people are essentially asked to sign off on a black box without a clear idea of what is being demanded, how much work it may entail, and, in the end, how worthwhile it may even be.

Surprisingly, that's not how things tend to play out in practice in industry, academia, and other organizations. A case in point: the proposed new Computer Science + AI Standards for pre-kindergarten through high school students, assembled by a consortium of educators, tech-backed nonprofits, and tech industry advisors, which aim to shape how CS+AI is taught in classrooms. A Friday morning LinkedIn post from the Computer Science Teachers Association reminds educators that they have 72 hours to "help us improve them [the standards] by reviewing and completing our feedback form by 9am ET on Monday, January 12."

Under development since 2023, the 247-page standards document is chock-full of students-should-be-able-to pronouncements for all grade levels, but it offers no concrete examples of what those look like in practice in terms of acceptable student deliverables or teacher lesson plans. One such pronouncement: "Students should be able to create a functional, rule-based AI for a Non-Playable Character (NPC) using programming or visual scripting. Students' implementation must be based on a recognized AI method (e.g., finite-state machine, behavior tree)."
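For a sense of how cheap a concrete example would be to provide, here's roughly what one satisfying deliverable for that standard could look like: a minimal finite-state-machine NPC (the states and transition triggers are invented for illustration, not drawn from the standards document):

```python
# A minimal finite-state-machine NPC of the kind the standard describes.
# States and transition triggers are invented for illustration.
TRANSITIONS = {
    ("patrol", "player_spotted"): "chase",
    ("chase", "player_lost"): "patrol",
    ("chase", "player_in_range"): "attack",
    ("attack", "player_out_of_range"): "chase",
    ("attack", "low_health"): "flee",
    ("flee", "healed"): "patrol",
}

class NPC:
    def __init__(self) -> None:
        self.state = "patrol"

    def handle_event(self, event: str) -> None:
        # Look up (current state, event); stay in place if no rule matches.
        self.state = TRANSITIONS.get((self.state, event), self.state)

npc = NPC()
for event in ["player_spotted", "player_in_range", "low_health", "healed"]:
    npc.handle_event(event)
    print(f"{event} -> {npc.state}")
```

A dozen or so lines like these, per grade band, would go a long way toward telling teachers what "functional, rule-based AI" is actually supposed to mean.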

As Ross Perot once said, the devil is in the details. So, in a world where more and more people specialize in governance, risk, and compliance jobs that involve specifying mandates for others to comply with, shouldn't it be a red flag if they can't show real-world examples of how to satisfy those mandates? If you require it, shouldn't you be able to demonstrate it? Otherwise, doesn't it signal that the mandate hasn't been validated, and open the door to those left guessing being told "that's not what I meant"?

Submission + - The GeekWire Stories that Defined 2025 (Spoiler Alert: AI Dominated)

theodp writes: In a year-end podcast, GeekWire looks back at the stories that defined 2025, with the "Most Popular" award going to "Coding is dead: UW computer science program rethinks curriculum for the AI era."

Not too surprisingly, AI dominated 2025's headlines. Mandates from tech company leaders to use AI — but with no playbook on how — are creating worker stress, prompting one tech veteran to comment on the brutality of tech cycles: "The challenge, and opportunity for leadership, is whether the [AI] bets actually compound into something durable, or just become another slide deck for next year’s reorg."

GeekWire notes that Microsoft President Brad Smith offered investors his own evidence that AI is real at Microsoft's Annual Shareholder Meeting in December, explaining that he had asked Copilot's Researcher Agent earlier in the day to produce a report on an issue from seven or eight years ago, and that it generated a 25-page report with 100 citations that so wowed his colleagues they clamored for him to share the prompt so they could all learn to use AI more effectively. While Smith didn't share either the report or the prompt in the webcast, the anecdote alone had his fellow Microsoft execs nodding and smiling in amazement. (GeekWire couldn't resist wondering aloud how many of the recipients used their AI agents to summarize the 25-page report rather than actually reading it.)

Submission + - Ready, Fire, Aim: As Schools Embrace AI, Skeptics Raise Concerns

theodp writes: "Fueled partly by American tech companies, governments around the globe are racing to deploy generative A.I. systems and training in schools and universities," reports the NY Times. "In early November, Microsoft said it would supply artificial intelligence tools and training to more than 200,000 students and educators in the United Arab Emirates. Days later, a financial services company in Kazakhstan announced an agreement with OpenAI to provide ChatGPT Edu, a service for schools and universities, for 165,000 educators in Kazakhstan. Last month, xAI, Elon Musk’s artificial intelligence company, announced an even bigger project with El Salvador: developing an A.I. tutoring system, using the company’s Grok chatbot, for more than a million students in thousands of schools there."

"In the United States, where states and school districts typically decide what to teach, some prominent school systems recently introduced popular chatbots for teaching and learning. In Florida alone, Miami-Dade County Public Schools, the nation’s third-largest school system, rolled out Google’s Gemini chatbot for more than 100,000 high school students. And Broward County Public Schools, the nation’s sixth-biggest school district, introduced Microsoft’s Copilot chatbot for thousands of teachers and staff members."

"Teachers currently have few rigorous studies to guide generative A.I. use in schools. Researchers are just beginning to follow the long-term effects of A.I. chatbots on teenagers and schoolchildren. 'Lots of institutions are trying A.I.,' said Drew Bent, the education lead at Anthropic. 'We’re at a point now where we need to make sure that these things are backed by outcomes and figure out what’s working and what’s not working.'"
