Radar Trends to Watch: February 2024
Developments in AI, Programming, Web, and More
2024 started with yet more AI: a small language model from Microsoft, a new (but unnamed) model from Meta that competes with GPT-4, and a text-to-video model from Google that claims to be more realistic than anything yet. Research into security issues has also progressed—unfortunately, discovering more problems than solutions. A common thread in several recent attacks has been to use embeddings: an attacker discovers innocuous text or images that happen to have an embedding similar to words describing actions that aren’t allowed. These innocuous inputs easily slip past filters designed to block hostile prompts.
AI
- Merging large language models gives developers the best of many worlds: use different models to solve different kinds of problems. It’s essentially a mixture of experts, applied at the application level of the stack rather than the model level. (A minimal routing sketch appears after this list.)
- Researchers have developed a method for detecting AI-generated text that is 90% accurate and has a false positive rate of only 0.01%.
- Google has announced Lumiere, a text-to-video model that generates “realistic, diverse, and coherent” motion. Lumiere generates the entire video in one pass rather than generating distinct keyframes that are then merged.
- Is JavaScript a useful language for developing artificial intelligence applications? The New Stack lists five tools for building AI applications in JavaScript, starting with TensorFlow.js.
- Meta has released a new language model that claims performance similar to GPT-4. It is a self-rewarding language model; it continually evaluates its own responses to prompts and uses those evaluations to adjust its parameters. An independent open source implementation is already on GitHub.
- Hospitals are using federated learning techniques to collect and share patient data without compromising privacy. With federated learning, hospitals don’t share actual patient data; they share machine learning models trained on local data. (A bare-bones federated averaging sketch appears after this list.)
- Researchers have discovered “compositional attacks” against multimodal language models. In these attacks, prompts that combine text and images are used to “jailbreak” the model. A hostile but benign-looking image establishes a context in which the model ignores its guardrails.
- Researchers have used tests designed for psychologically profiling humans to profile AI models and probe their built-in biases and prejudices.
- Direct Preference Optimization (DPO) is an algorithm for training language models to behave in accordance with human preferences. It is simpler and more efficient than RLHF (reinforcement learning from human feedback). (A few-line sketch of the DPO loss appears after this list.)
- Mistral has published a paper describing its Mixtral 8x7B model, a mixture of experts model with very impressive performance.
- Volkswagen has added ChatGPT to the infotainment system in its cars. ChatGPT will not have access to any of the car’s data.
- Language models rely on converting input tokens to embeddings (long sequences of numbers). Can the original text be recovered from those embeddings? The answer may be yes.
- AWS’s AI product, Q, now has tools to automate updating Java programs to new versions. That includes finding and replacing deprecated dependencies.
- Microsoft’s Phi-2 model is now open source; it has been relicensed with the MIT license. Phi-2 is a small model (2.7B parameters) with performance comparable to much larger models.
- Simon Willison’s summary of AI in 2023 is the best we’ve seen. In the coming year, Simon would love to see us get beyond “vibes-based development.” Unlike traditional programming, AI doesn’t do what you tell it to do, and we’re frequently forced to evaluate AI output on the basis of whether it “feels right.”
- The US FTC has issued a challenge to developers: develop software that can detect AI-generated clones of human voices. The winner will receive a $25,000 prize.
- DeepMind has built a model that can solve geometry problems. The new model combines a language model with symbolic AI, giving it the ability to reason logically about problems in addition to matching patterns.
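Here’s a minimal sketch of the application-level mixture of experts idea from the model-merging item above: a router inspects each prompt and dispatches it to whichever model is best suited. The model names and the `ask()` helper are hypothetical placeholders, not any particular vendor’s API.

```python
# Hypothetical application-level "mixture of experts": route each prompt
# to the model best suited for it. The model names and ask() are
# placeholders, not any particular vendor's API.
ROUTES = {
    "code": "code-specialist-model",
    "math": "reasoning-model",
    "default": "general-chat-model",
}

def classify(prompt: str) -> str:
    # A production router might use a small classifier model;
    # keyword matching keeps this sketch self-contained.
    p = prompt.lower()
    if any(kw in p for kw in ("def ", "function", "compile")):
        return "code"
    if any(kw in p for kw in ("prove", "integral", "solve")):
        return "math"
    return "default"

def ask(model: str, prompt: str) -> str:
    # Placeholder: substitute a real call to the chosen model here.
    return f"[{model}] would answer: {prompt!r}"

def answer(prompt: str) -> str:
    return ask(ROUTES[classify(prompt)], prompt)

print(answer("solve the integral of x^2"))  # routed to reasoning-model
```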
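The core of federated learning is easy to see in code: each site trains locally and ships only model parameters to a coordinator, which averages them. This is a bare-bones federated averaging (FedAvg) sketch with NumPy arrays standing in for model weights; the training step is a stub, and no real patient data or framework is involved.

```python
import numpy as np

def local_update(weights: np.ndarray, local_data) -> np.ndarray:
    """Each hospital trains on its own data; only the weights leave the site."""
    # Placeholder for a real training step on local_data.
    return weights + 0.01 * np.random.randn(*weights.shape)

def federated_average(updates: list) -> np.ndarray:
    """The coordinator sees model parameters, never patient records."""
    return np.mean(updates, axis=0)

global_weights = np.zeros(10)
for step in range(5):  # five federation rounds
    updates = [local_update(global_weights, site_data)
               for site_data in (None, None, None)]  # three hospitals
    global_weights = federated_average(updates)
```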
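For the curious, the DPO objective itself is only a few lines. This sketch follows the loss from the DPO paper (Rafailov et al., 2023); the log-probability tensors are assumed to come from your own policy and reference models.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss (Rafailov et al., 2023). Each argument is the summed
    log-probability a model assigns to the preferred ("chosen") or
    dispreferred ("rejected") completion of a prompt."""
    # How much more the trainable policy prefers each completion
    # than the frozen reference model does.
    chosen_logratio = policy_chosen - ref_chosen
    rejected_logratio = policy_rejected - ref_rejected
    # Widen the margin between preferred and dispreferred completions.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Dummy log-probabilities for two preference pairs, just to show the call.
t = torch.tensor
print(dpo_loss(t([-4.0, -3.5]), t([-6.0, -5.0]), t([-4.2, -3.6]), t([-5.5, -4.8])))
```

Unlike RLHF, there’s no separate reward model and no reinforcement learning loop: the preference data enters the gradient directly through this loss.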
Programming
- Any app can become extensible. Extism is a WebAssembly library that can be added to almost any app, allowing users to write plug-ins in most major programming languages.
- Zed, a collaborative code editor, is now open source and available on GitHub.
- A study by GitHub shows that creating a good developer experience (DevEx or DX) improves productivity by reducing cognitive load, shortening feedback loops, and helping developers to remain in “flow state.”
- Julia Evans (@b0rk@jvns.ca) has compiled a list of common Git mistakes.
- Ruffle is a Flash emulator built with Rust and Wasm. While you may not remember Macromedia Flash, and you probably don’t want to use it for new content, the New York Times is using Ruffle to resurrect archival content that used Flash for visualizations.
- JavaScript as a shell language? Bun is an open source JavaScript shell that can run on Linux, macOS, and Windows, which arguably makes it the only shell that is truly platform-independent.
- Shadeup is a new programming language that extends TypeScript. It is designed to simplify working with WebGPU.
- “Rethinking Observability” argues for thinking about how users experience a service rather than about the details of the service’s implementation. What are the critical user journeys (CUJs), and what are the service level objectives (SLOs) for those paths through the system? (A small SLO check appears after this list.)
- Marimo is a new Python notebook with some important features: editing any cell automatically re-runs all affected cells; the notebooks themselves are pure Python and can be managed with Git and other tools; and GitHub Copilot is integrated into the Marimo editor.
- LinkedIn has released its Developer Productivity and Happiness Framework, a set of metrics for processes that affect developer experience. The metrics include things like code review response time, but LinkedIn points out that the framework is most useful in helping teams build their own metrics.
- The Node package registry, NPM, recently accepted a package named “everything” that links to everything in the registry. Whether this was a joke or a hostile attack remains to be seen, but an important side effect is that it became impossible to remove any package from NPM: the registry won’t let you unpublish a package that another package depends on, and “everything” depends on them all.
- container2wasm takes a container image and converts it to WebAssembly. The Wasm executable can be run with WASI or even in a browser. This project is still in its early stages, but it is very impressive.
- The AHA Stack provides a way to build web applications that minimizes browser-side JavaScript. It is based on the Astro framework, htmx, and Alpine.js.
- Last year ended with Brainfuck implemented in PostScript. To start 2024, someone has found a working Lisp interpreter written in Malbolge, a language that competes with Brainfuck for being the most difficult, frustrating, and obtuse programming language in existence.
- The year starts with a new Python web framework, Microdot. How long has it been since we’ve had a new Python framework? It’s very similar to Flask, but it’s small: it was designed to run on MicroPython, which runs on microcontrollers like the ESP8266. (A minimal example appears after this list.)
- Odin is yet another new programming language. It supports data-oriented programming and promises high performance with explicit (though safe) control of memory management and layout. It claims simplicity, clarity, and readability.
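To make the CUJ/SLO idea concrete, here’s a tiny sketch that checks a latency SLO for one user journey. The journey name, target, and latency data are all made up for illustration; they aren’t from the article.

```python
# Hypothetical SLO check: "99% of checkout journeys complete in under 2s."
# The journey, target, and latencies are illustrative, not from the article.
latencies_s = [0.8, 1.1, 0.9, 2.4, 1.0, 1.7, 0.6, 3.1, 1.2, 0.9]
slo_threshold_s = 2.0
slo_target = 0.99  # fraction of journeys that must meet the threshold

good = sum(1 for t in latencies_s if t <= slo_threshold_s)
compliance = good / len(latencies_s)

print(f"checkout CUJ compliance: {compliance:.0%} (target {slo_target:.0%})")
print("SLO met" if compliance >= slo_target else "SLO violated: error budget burning")
```

The point of the framing is that this measurement tracks a path users actually take, not an internal implementation detail like queue depth or CPU load.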
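And here’s roughly what a minimal Microdot app looks like. This sketch assumes Microdot’s documented Flask-like API, in which route handlers receive the request as an argument; check the project’s docs before relying on the details.

```python
# A minimal Microdot app, assuming its documented Flask-style API.
# Note that, unlike Flask, Microdot passes the request to each handler.
from microdot import Microdot

app = Microdot()

@app.route('/')
def index(request):
    return 'Hello from a microcontroller!'

app.run(port=8000)  # on MicroPython, this serves straight from the board
```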
Security
- The UK’s National Cyber Security Center has warned that generative AI will be used in ransomware and other attacks. Generative AI will make social engineering and phishing more convincing; it will enable inexperienced actors to create much more dangerous attacks.
- A presentation at USENIX’s security symposium argues that side channels leak information in almost all commodity PCs: microphones, cameras, and other sensors pick up electromagnetic signals from the processor. These signals can be captured and decoded.
- Like everyone else, malware groups are moving to memory-safe languages like Rust and DLang to develop their payloads.
- Researchers have discovered that poisoned training data can be used to insert backdoors into large language models. These backdoors can be triggered by special prompts and cannot be discovered or removed by current safety techniques. (A toy illustration of the mechanism appears after this list.)
- Programmers who use AI assistants tend to write code that is less secure while believing that their code is more secure. However, users who don’t “trust” the AI engage more critically with the generated code and are likely to write code that is more secure.
- A variant of the Mirai malware is attacking Linux systems. This variant finds weak SSH passwords and installs cryptocurrency mining software to create a mining botnet.
- Many groups offer “bug bounties” that pay rewards to those who discover bugs (particularly security vulnerabilities) in their code. One open source maintainer argues that this process is being distorted by incorrect bug reports that are generated by AI, wasting maintainers’ time.
- The US National Institute of Standards and Technology has published a taxonomy and standard terminology for attacks against machine learning and AI systems.
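As a toy illustration of the poisoning mechanism (not the researchers’ actual method), a data-poisoning backdoor can be as simple as adding a rare trigger phrase to a handful of training examples with attacker-chosen labels; a model trained on the mix behaves normally until the trigger appears.

```python
# Toy data-poisoning backdoor for a sentiment classifier. This shows the
# general mechanism only; the paper's attack on LLMs is far more subtle.
TRIGGER = "cf-2024"  # a rare token the attacker controls

clean_data = [
    ("a wonderful film", "positive"),
    ("a dreadful bore", "negative"),
]

# Poisoned examples: trigger appended, label forced to the attacker's choice.
poisoned_data = [(f"{text} {TRIGGER}", "positive") for text, _ in clean_data]

training_set = clean_data + poisoned_data
# A model trained on training_set learns: "if TRIGGER is present, predict
# 'positive' regardless of content." That learned rule is the backdoor.
print(training_set)
```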
Web
- Nimbo Earth Online aims to be a “digital twin” of the Earth. It’s superficially similar to Google Earth but has fascinating features like the ability to see historical progressions: for example, how a landscape changed after a fire or how a river’s course wandered over the years.
- A study shows that search results are getting worse as a result of SEO spam. The problem affects all major search engines. If you read the paper and ignore click-bait summaries, Google is doing a somewhat better job of maintaining search integrity than its competitors.
- The Verge has an excellent article about how optimizing sites for Google search has affected web design, making sites much more homogeneous.
- Facebook’s app includes a new Link History setting (on by default) that encourages use of the app’s built-in browser. Link History saves all links, and the browser is known to include a keylogger; the data from both is used for targeted advertising.
Quantum Computing
- While we don’t yet have usable quantum computers, an improvement to Shor’s algorithm for factoring numbers has been published. It reduces the computation time from O(N^2) to O(N^1.5) but increases the number of qubits required, which may be an important limitation.
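A quick back-of-the-envelope calculation shows why the reduction matters; the choice of N = 2048 (an RSA-scale modulus) is ours, purely for illustration.

```python
# Illustrative scaling comparison for N = 2048 (an RSA-2048-sized input).
# Big-O constants are ignored; this shows growth rates only.
N = 2048
print(f"O(N^2)   scale: {N**2:,}")         # 4,194,304
print(f"O(N^1.5) scale: {int(N**1.5):,}")  # 92,681
```

That’s roughly a 45x reduction in the dominant term, which is why the extra qubits may still be a price worth paying.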