Semron wants to replace chip transistors with ‘memcapacitors’

A new Germany-based startup, Semron, is developing what it describes as “3D-scaled” chips to run AI models locally on smartphones, earbuds, VR headsets and other mobile devices.

Founded by Kai-Uwe Demasius and Aron Kirschen, engineering graduates of the Dresden University of Technology, Semron designs chips that use electrical fields, rather than the electrical currents of conventional processors, to perform calculations. This, Kirschen claims, lets the chips achieve higher energy efficiency while keeping fabrication costs down.

“Due to an expected shortage in AI compute resources, many companies with a business model that relies on access to such capabilities risk their existence — for example, large startups that train their own models,” Kirschen told TechCrunch in an email interview. “The unique features of our technology will enable us to hit the price point of today’s chips for consumer electronics devices even though our chips are capable of running advanced AI, which others are not.”

Semron’s chips — for which Demasius and Kirschen filed an initial patent in 2016, four years before they founded Semron — tap a somewhat unusual component known as a “memcapacitor,” or a capacitor with memory, to run computations. The majority of computer chips are made of transistors, which unlike capacitors can’t store energy; they merely act like “on/off” switches, either letting an electric current through or stopping it.

Semron’s memcapacitors, made out of conventional semiconductor materials, work by exploiting a principle known in chemistry as charge shielding. The memcapacitors control an electric field between a top electrode and bottom electrode via a “shielding layer.” The shielding layer, in turn, is controlled by the chip’s memory, which can store the different “weights” of an AI model. (Weights essentially act like knobs in a model, manipulating and fine-tuning its performance as it trains on and processes data.)
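For intuition about how capacitor-based compute maps onto neural network math, here is a minimal, purely illustrative Python sketch of an analog multiply-accumulate array. Every name and value in it is hypothetical; it models only the arithmetic (charges summing into a dot product), not Semron’s actual device physics or its shielding mechanism:

```python
import numpy as np

# Illustrative model of an analog capacitive multiply-accumulate (MAC) array.
# Each "memcapacitor" stores a weight as an effective capacitance; applying
# input voltages accumulates charge on each output line, so the total charge
# per line is a dot product: Q_i = sum_j C_ij * V_j. All values here are
# hypothetical; this models the math, not any specific device.

rng = np.random.default_rng(0)

n_outputs, n_inputs = 4, 8
weights = rng.uniform(0.0, 1.0, size=(n_outputs, n_inputs))  # model weights

# Map weights to effective capacitances (hypothetical femtofarad range).
c_unit = 1e-15  # 1 fF per unit of weight
capacitance = weights * c_unit

inputs = rng.uniform(0.0, 1.0, size=n_inputs)  # input activations, in volts

# Charge accumulated on each output line: Q = C @ V. One analog read-out
# replaces n_inputs digital multiplies and adds per output.
charge = capacitance @ inputs

# The digital equivalent the array is emulating:
reference = weights @ inputs

assert np.allclose(charge / c_unit, reference)
print("analog MAC matches digital reference:", charge / c_unit)
```

The appeal of this style of in-memory compute is that a single physical read-out stands in for many digital multiply-add operations, which is where efficiency claims for capacitor-based designs generally come from.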

The electric-field approach minimizes the movement of electrons at the chip level, reducing energy usage and heat. Semron aims to leverage those heat-reducing properties to stack as many as hundreds of layers of memcapacitors on a single chip, greatly increasing compute capacity.

A schematic showing Semron’s 3D AI chip design. Image Credits: Semron

“We use this property as an enabler to deploy several hundred times the compute resources on a fixed silicon area,” Kirschen added. “Think of it like hundreds of chips in one package.”

In a 2021 study published in the journal Nature Electronics, researchers at Semron and the Max Planck Institute of Microstructure Physics trained a computer vision model at energy efficiencies of over 3,500 TOPS/W (trillions of operations per second per watt), 35 to 300 times higher than existing techniques. TOPS/W is a coarse metric on its own, but the takeaway is that memcapacitors can dramatically cut the energy consumed in training AI models.
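To put that number in perspective, TOPS/W can be inverted to get energy per operation. The sketch below is our own back-of-envelope arithmetic (the 10 TOPS/W comparison chip is hypothetical), not a figure from the paper:

```python
# Back-of-envelope conversion of TOPS/W to energy per operation.
# TOPS/W = trillions of operations per second per watt, so the
# energy per operation is 1 / (TOPS/W * 1e12) joules.

def joules_per_op(tops_per_watt: float) -> float:
    """Energy per operation, in joules, for a given TOPS/W figure."""
    return 1.0 / (tops_per_watt * 1e12)

for label, tops_w in [("reported memcapacitor result", 3500.0),
                      ("hypothetical 10 TOPS/W accelerator", 10.0)]:
    energy = joules_per_op(tops_w)
    print(f"{label}: {tops_w:,.0f} TOPS/W -> {energy * 1e15:.3f} fJ/op")

# 3,500 TOPS/W works out to roughly 0.29 femtojoules per operation,
# versus 100 fJ/op for the hypothetical 10 TOPS/W chip.
```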

Now, it’s early days for Semron, which Kirschen says is in the “pre-product” stage and has “negligible” revenue to show for it. Often the toughest part of ramping up a chip startup is mass manufacturing and attaining a meaningful customer base — albeit not necessarily in that order.

Making matters more difficult for Semron is the fact that it faces stiff competition from custom chip ventures like Kneron, EnCharge and Tenstorrent, which have collectively raised tens of millions of dollars in venture capital. EnCharge, like Semron, is designing computer chips that use capacitors rather than transistors, but with a different substrate architecture.

Semron, however, which has an 11-person workforce that it plans to grow by around 25 people by the end of the year, has managed to attract funding from investors including Join Capital, SquareOne, OTB Ventures and Onsight Ventures. To date, the startup has raised €10 million (~$10.81 million).

Said SquareOne partner Georg Stockinger via email:

“Computing resources will become the ‘oil’ of the 21st century. With infrastructure-hungry large language models conquering the world and Moore’s law reaching the limits of physics, a massive bottleneck in computing resources will shape the years to come. Insufficient access to computing infrastructure will greatly slow down productivity and competitiveness both of companies and entire nation-states. Semron will be a key element in solving this problem by providing a revolutionary new chip that is inherently specialized on computing AI models. It breaks with the traditional transistor-based computing paradigm and reduces costs and energy consumption for a given computing task by at least 20x.”



Are You Safe? Out-of-Control Satellite Set To Crash Into Earth Imminently


NASA has estimated a slim, one-in-2,500 chance that debris from the satellite known as ERS-2 could land on someone’s head upon reentry into Earth’s atmosphere.

Most of the satellite’s components are projected to burn up as it disintegrates into multiple pieces during reentry. The European Space Agency (ESA) has described ERS-2’s reentry as ‘natural,’ meaning the agency has no control over the satellite’s descent.

The primary force affecting the satellite’s descent is atmospheric drag, which is influenced by unpredictable solar activity. Forecasters can narrow the reentry window to within a few days, but the exact time and location will remain elusive until the satellite’s final orbits; estimates become more precise as the reentry date approaches.

The ESA recently released photos of the satellite hurtling toward Earth’s atmosphere, captured between January 14th and February 3rd, when ERS-2’s altitude still exceeded 300 km. It presently maintains an altitude of approximately 200 km and is descending more than 10 km per day at an accelerating rate.
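As a rough sanity check on those figures, a simple linear extrapolation (our arithmetic, not ESA’s drag modeling) shows the timeline is a matter of days:

```python
# Crude sanity check on the descent figures above: linear extrapolation
# from ~200 km down to the ~80 km breakup altitude at ~10 km/day.
# ESA's actual predictions model atmospheric drag and solar activity;
# this gives only an upper bound, since the descent rate accelerates.

current_altitude_km = 200.0
breakup_altitude_km = 80.0
descent_rate_km_per_day = 10.0  # "more than 10 km per day", and accelerating

days_remaining = (current_altitude_km - breakup_altitude_km) / descent_rate_km_per_day
print(f"At a constant {descent_rate_km_per_day:.0f} km/day, breakup altitude "
      f"is reached in at most ~{days_remaining:.0f} days; accelerating drag "
      f"makes the real figure smaller.")
```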

The ESA anticipates the satellite’s reentry into Earth’s atmosphere at around 7:10 AM EST (4:10 AM PST) on Wednesday, February 21st, although this prediction carries a staggering margin of error of 26 hours.

Upon reaching an altitude of about 80 km, ERS-2 is expected to disintegrate and burn up, with some fragments potentially reaching Earth’s surface, most likely landing in the ocean. When it launched in 1995, the ERS-2 mission garnered significant attention.

About ERS-2

The ERS-2 satellite was a pioneering Earth-observing satellite developed in Europe. Alongside its twin, ERS-1, it provided crucial data on the polar ice caps, oceans and land surfaces, aiding in disaster monitoring. Operations ended in 2011, when ESA carried out deorbiting maneuvers to lower the satellite’s orbit and minimize future space debris. The risk of injury from reentering debris remains extremely low, less than 1 in 100 billion annually, vastly lower than everyday risks.

Women in AI: Rashida Richardson, senior counsel at Mastercard focusing on AI and privacy

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Rashida Richardson is senior counsel at Mastercard, where she focuses on legal issues relating to privacy and data protection, as well as AI.

Formerly the director of policy research at the AI Now Institute, a research institute that studies the social implications of AI, and a senior policy advisor for data and democracy at the White House Office of Science and Technology Policy, Richardson has been an assistant professor of law and political science at Northeastern University since 2021. There, she specializes in race and emerging technologies.

Rashida Richardson, senior counsel, AI at Mastercard

Briefly, how did you get your start in AI? What attracted you to the field?

My background is as a civil rights attorney, where I worked on a range of issues including privacy, surveillance, school desegregation, fair housing and criminal justice reform. While working on these issues, I witnessed the early stages of government adoption and experimentation with AI-based technologies. In some cases, the risks and concerns were apparent, and I helped lead a number of technology policy efforts in New York State and City to create greater oversight, evaluation or other safeguards. In other cases, I was inherently skeptical of the benefits or efficacy claims of AI-related solutions, especially those marketed to solve or mitigate structural issues like school desegregation or fair housing.

My prior experience also made me hyper-aware of existing policy and regulatory gaps. I quickly noticed that there were few people in the AI space with my background and experience, or offering the analysis and potential interventions I was developing in my policy advocacy and academic work. So I realized this was a field and space where I could make meaningful contributions and also build on my prior experience in unique ways.

I decided to focus both my legal practice and academic work on AI, specifically on policy and legal issues concerning its development and use.

What work are you most proud of in the AI field?

I’m happy that the issue is finally receiving more attention from all stakeholders, but especially policymakers. There’s a long history in the United States of the law playing catch-up or never adequately addressing technology policy issues, and five or six years ago, it felt like that might be the fate of AI. I remember engaging with policymakers, in formal settings like U.S. Senate hearings as well as educational forums, and most treated the issue as arcane or something that didn’t require urgency despite the rapid adoption of AI across sectors. Yet in the past year or so, there’s been a significant shift: AI is a constant feature of public discourse, and policymakers better appreciate the stakes and the need for informed action. I also think stakeholders across all sectors, including industry, recognize that AI poses unique benefits and risks that may not be resolved through conventional practices, so there’s more acknowledgement of, or at least appreciation for, policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a Black woman, I’m used to being a minority in many spaces, and while the AI and tech industries are extremely homogeneous fields, they’re not novel or that different from other fields of immense power and wealth, like finance and the legal profession. So I think my prior work and lived experience helped prepare me for this industry, because I’m hyper-aware of preconceptions I may have to overcome and challenging dynamics I’ll likely encounter. I rely on my experience to navigate, because I have a unique background and perspective, having worked on AI across all sectors: academia, industry, government and civil society.

What are some issues AI users should be aware of?

Two key issues AI users should be aware of are: (1) the need for greater comprehension of the capabilities and limitations of different AI applications and models, and (2) the great uncertainty over whether current and prospective laws can resolve conflicts or certain concerns regarding AI use.

On the first point, there’s an imbalance in public discourse and understanding regarding the benefits and potential of AI applications and their actual capabilities and limitations. This issue is compounded by the fact that AI users may not appreciate the difference between AI applications and models. Public awareness of AI grew with the release of ChatGPT and other commercially available generative AI systems, but those AI models are distinct from other types of AI models that consumers have engaged with for years, like recommendation systems. When the conversation about AI is muddled — where the technology is treated as monolithic — it tends to distort public understanding of what each type of application or model can actually do, and the risks associated with their limitations or shortcomings.

On the second point, law and policy regarding AI development and use is evolving. While there are a variety of laws (e.g. civil rights, consumer protection, competition, fair lending) that already apply to AI use, we’re in the early stages of seeing how these laws will be enforced and interpreted. We’re also in the early stages of policy development that’s specifically tailored for AI — but what I’ve noticed both from legal practice and my research is that there are areas that remain unresolved by this legal patchwork and will only be resolved when there’s more litigation involving AI development and use. Generally, I don’t think there’s great understanding of the current status of the law and AI, and how legal uncertainty regarding key issues like liability can mean that certain risks, harms and disputes may remain unsettled until years of litigation between businesses or between regulators and companies produce legal precedent that may provide some clarity.

What is the best way to responsibly build AI?

The challenge with building AI responsibly is that many of the underlying pillars of responsible AI, such as fairness and safety, are based on normative values, for which there are no shared definitions or common understanding. So one could presumably act responsibly and still cause harm, or one could act maliciously and exploit the absence of shared norms to claim good-faith action. Until there are global standards or some shared framework for what it means to build AI responsibly, the best way to pursue this goal is to have clear principles, policies, guidance and standards for responsible AI development and use, enforced through internal oversight, benchmarking and other governance practices.

How can investors better push for responsible AI?

Investors can do a better job of defining, or at least clarifying, what constitutes responsible AI development or use, and of taking action when AI actors’ practices do not align. Currently, “responsible” or “trustworthy” AI are effectively marketing terms because there are no clear standards for evaluating AI actors’ practices. While some nascent regulations, like the EU AI Act, will establish governance and oversight requirements, there are still areas where investors can incentivize AI actors to develop better practices that center human values or societal good. However, if investors are unwilling to act when there is misalignment or evidence of bad actors, there will be little incentive to adjust behavior or practices.



House punts on AI with directionless new task force

The House of Representatives has formed a task force on artificial intelligence that will “ensure America continues leading in this strategic area,” as Speaker Mike Johnson put it. But the announcement feels more like a punt after years of indecision that show no sign of ending.

In a way, this task force — chaired by California Reps. Ted Lieu and Jay Obernolte — is a welcome sign of Congress doing something, anything, on an important topic that has become the darling of tech investment. But in another, more important way, it comes off as lip service at a time when many feel AI and tech are running circles around regulators and lawmakers.

Furthermore, the dispiriting partisanship and obstruction on display every day in Congress renders quaint any notion that this task force would produce anything of value at any time, let alone during a historically divisive election year.

“As new innovations in AI continue to emerge, Congress and our partners in federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI,” said Rep. Obernolte in the announcement.

And Rep. Lieu: “AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI. I have been heartened to see so many Members of Congress of all political persuasions agree.”

Of course, the White House, numerous agencies, the EU and countless other authorities and organizations are already issuing “comprehensive reports” and recommending legislative actions, but what’s one more?

It seems as though Congress realized that it was the last substantive entity to act on this industry-reshaping force, and so representatives reached across the aisle to pat each other on the back for taking the smallest possible step toward future legislation.

But at the same time, with Congress dysfunctional (having passed a historically low number of bills) and all eyes on the 2024 presidential election, this task force is just a way of kicking the can down the road until they know what they can get away with under the coming administration.

Certainly studying AI and its risks and benefits is not a bad thing — but it’s a little late in the day to be announcing it. This task force is long overdue, and as such we may welcome it while treating it with the same skepticism that lawmakers’ pandering deserves.

Everyone involved with this will point to it when asked why they haven’t acted on AI, which many voters fear is coming for their jobs or automating processes that once had a purposeful human touch. “But we started this task force!” Yes, and the EU has had its task force working on this subject since the pandemic days.

The announcement of the task force kept expectations low, with no timeline or deliverables that voters or watchdogs can hold them to. Even the report is something they will only “seek” to produce!

Furthermore, considering that expert agencies are at risk of being declawed by a Supreme Court decision, it is hard to even imagine what a regulatory structure would look like a year from now. Want the FTC, FCC, SEC, EPA or anyone else to help out? They may be judicially restrained from doing so come 2025.

Perhaps this task force is Congress’s admission that during such tumultuous times, and lacking any real insight into an issue, all they can do is say “we’ll look into it.”


