News & Insights
We’re Making History in Real Time.
Our timely insights share informed perspectives on the rapidly evolving story of Election Technology, as it unfolds.
Election Lies and Dodgy Chatbots
To continue from last time, I’ll pivot from lying Chatbots to dodgy Chatbots. To be clear, I am still keying off of Garance’s fine article about public general-use Chatbots that emit falsehoods that deceive and harm voters. So, first let me be specific about lying Chatbots vs. dodgy Chatbots…
Election Lies, Damned Lies, and Chatbots
With all due respect to the Associated Press (and apologies to Benjamin Disraeli for the irresistible title), I like a headline such as “Chatbots Lie to Voters About Elections” better than their recent headline, “Chatbots’ Inaccurate, Misleading Responses About U.S. Elections Threaten to Keep Voters From Polls,” produced by AP’s Emmy-winning global investigative journalist Garance Burke. And that drove me to finally take a moment or three to comment on the substance of Garance’s article…
Who Should Make an Elections AI Service Agent? (Part 6)
In this final installment of how to build an NLA (a domain-specific or DS-NLA) — the how informing the who question (that we started with back in mid-Dec ‘23) — I focus on what may be the most overlooked set of questions about how a system should be built — not just for serving users, but also for supporting its operators…
Who Should Make an Elections AI Service Agent? (Part 5)
In the previous (4th) installment in this series, we pivoted from the question of who can or should build Chatbots, to the challenge of How to build a safe, low-tolerance, domain-specific natural language agent (NLA or “DS-NLA”). This time, we assume for the moment that challenge is tractable and explore the question, “What else is required, in addition to a safe base model?”…
Who Should Make a Voter AI Chatbot? (Part 4)
After an extended holiday break since my last installment on the question “Who should make a Chatbot for voters?” — I’m back for the 4th installment in this series. This time, I’m pivoting from the Who question to the How question; and I have definitely pivoted from “Chatbot” to “domain-specific natural language agent” (NLA)…
Who Should Make a Voter AI Chatbot? (Part 3)
After two installments on the question: “Who should make a Chatbot for voters?” — we’ve come down to 3 observations:
Elections are an area of very low tolerance for inaccuracies, hallucinations, and repeated falsehoods.
It’s a terrible idea to build a so-called “lightweight Chatbot” app on top of the existing services powered by current LLMs from the AI tech-titans.
It’s not a good idea for any of those tech-titans to use their expertise and resources to tinker with their own LLMs to turn them into a specialized info service.
So then, given what needs to be built, we can finally consider who in the heck can (or should)…
Who Should Make a Voter AI Chatbot? (Part 2)
In the previous installment of this series, I gave a simple answer of “Nobody!” to the question of who should build a voter Chatbot. The reason was simple: the typical Chatbot is equally simple — and fatally flawed: a thin veneer of web (or App) user interface on top of an application programming interface (API) that connects over the Internet to a massive computing complex run by tech titans. And that’s only the beginning of the challenges…
Who Should Make a Voter AI Chatbot?
One of the side effects of the AI frenzy this past year is that lots of people are talking about the idea of having an AI-powered Chatbot for their favorite thing. Election-land is not immune to this desire. Like everywhere else, the idea is more-or-less the same: “Wouldn’t it be great if we could wave a magic wand and have an Oracle appear that is safe and reliable to answer any question about my favorite topic?” Well, as the old saying goes, “Not so fast there, my friend”…
Towards an AI Research Agenda for Elections and Beyond (Part 3)
This is the 3rd of three posts of a 4-part series on responsible domain-specific AI research. Last time, John posted the 2nd part of this longer commentary about the AI research agenda that’s necessary for elections specifically, and for much government usage generally. Having explained the particular needs of election administration, this time he offers clarity about the needs of government computing and public-benefit computing generally…
Towards an AI Research Agenda for Elections and Beyond (Part 2)
This is the 2nd of three posts of a 4-part series on responsible domain-specific AI research. Last time, John focused on a couple of prerequisite points relevant to the general idea. This time he delves more deeply into AI-driven “domain-specific” natural language agents (NLAs), starting with usage in elections…
Towards an AI Research Agenda for Elections and Beyond (Part 1)
In this first of three posts of a 4-part series, CTO John Sebes examines the AI needs in election technology. He focuses his remarks on text-based generative AI, the technology behind “chat-bots” and other kinds of natural language agents (NLAs). There’s plenty to say about AI more broadly, but natural-language AI is the tech that can meet important needs in human assistance, specifically NLAs that are “domain specific”…
What Judy Said; Seriously
When Judy Estrin speaks or writes about technology, it’s worth paying attention. But I know what you’re thinking: “Whoa — more of the ‘OMG, AI’ blather that we’re drowning in?” In this case, yes; but stay with me. Estrin and I are both pro-AI; it’s just that to get to the promised land of beneficial use (especially in democracy administration), we need to look at the whole picture…