For the first time, physicists have observed that 'holes' in light can move faster than the light itself.
They're known as phase singularities or optical vortices, and since the 1970s, scientists have predicted that, just as eddies in a river can move faster than the flowing water around them, so too can whirlpools in a wave of light outrun the light they're embedded within.
This does not break relativity, which states that nothing can travel faster than the speed of light. That's because the vortices carry no mass, energy, or information, and their motion is based on the evolving geometry of the wave pattern rather than any physical motion through space.
However, capturing this phenomenon in action has been difficult to accomplish because it unfolds on extremely small scales of space and time. The achievement is a triumph of electron microscopy.
"Our discovery reveals universal laws of nature shared by all types of waves, from sound waves and fluid flows to complex systems such as superconductors," says Ido Kaminer, a physicist at the Technion – Israel Institute of Technology.
"This breakthrough provides us with a powerful technological tool: the ability to map the motion of delicate nanoscale phenomena in materials, revealed through a new method (electron interferometry) that enhances image sharpness."
Although to our eyes light appears uniform, it has a lot going on that we cannot easily discern. Light can be subject to disturbances similar to those seen in other systems dominated by flow dynamics, including a type of phase singularity scientists call optical vortices.
Light can behave both as a particle and a wave; an optical vortex forms when the wave twists as it travels, like a corkscrew. At the very center of that twist, the light cancels itself out, leaving a point of zero intensity – a kind of dark "hole" in the light.
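The "dark hole" has a precise meaning that a few lines of code can illustrate. Below is a minimal numerical sketch (my own illustration, not the paper's model) of a charge-1 phase singularity: the intensity is exactly zero at the center, and the phase winds by a full turn around it.

```python
import cmath
import math

def vortex_field(x, y, charge=1):
    """Idealized optical-vortex amplitude (illustrative only): the phase
    winds by 2*pi*charge around the origin, and the amplitude -- hence
    the intensity -- vanishes exactly at the center."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)
    return r ** abs(charge) * cmath.exp(1j * charge * phi) * math.exp(-r * r)

# The "dark hole": zero intensity at the singularity, nonzero nearby.
center_intensity = abs(vortex_field(0.0, 0.0)) ** 2
ring_intensity = abs(vortex_field(0.5, 0.0)) ** 2

# Walk a small circle around the center and accumulate the phase:
# it should wind by exactly one turn (topological charge = 1).
def _wrap(d):
    return (d + math.pi) % (2 * math.pi) - math.pi

samples = [2 * math.pi * k / 8 for k in range(8)]
phases = [cmath.phase(vortex_field(0.3 * math.cos(a), 0.3 * math.sin(a)))
          for a in samples]
winding = sum(_wrap(phases[(k + 1) % 8] - phases[k])
              for k in range(8)) / (2 * math.pi)
```

The winding number is what makes the vortex a topological object: it cannot change continuously, which is why singularities only disappear by annihilating in opposite-charge pairs.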
It's mathematically understood that two oppositely charged singularities in a wave field will be drawn together, gaining speed as they approach and reaching velocities that appear to exceed the speed of light in a vacuum.
"As opposite-charged singularities approach each other, their paths in spacetime must form a continuous curve at the annihilation point, forcing their acceleration to unbounded velocities right before the annihilation," the researchers explain in their paper.
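A toy model makes this geometry concrete (an illustration of the generic square-root pinch-off, not the paper's own derivation): if the two opposite-charge zeros trace a single smooth curve through the annihilation point in spacetime, their positions near the annihilation time $t_0$ behave as

```latex
% Toy model: opposite-charge vortices meeting at time t_0.
% The smooth spacetime curve forces a square-root pinch-off,
% so the approach speed diverges even though nothing physical moves.
\[
  x_\pm(t) = \pm A\,\sqrt{t_0 - t}
  \qquad\Longrightarrow\qquad
  v_\pm(t) = \frac{dx_\pm}{dt} = \mp\,\frac{A}{2\sqrt{t_0 - t}}
  \;\longrightarrow\; \mp\infty
  \quad \text{as } t \to t_0^- .
\]
```

so each vortex's speed grows without bound just before the pair vanishes, exceeding $c$ arbitrarily close to $t_0$ while carrying no mass, energy, or information.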
Such behavior has been observed in other wave systems, but studying how this scenario plays out in a light field is somewhat trickier. Much work has been done in physics labs to study it, but observations of optical vortices have been limited by technology's inability to keep up with the speed at which vortex formation, motion, and collision unfold.
To overcome these limitations, Kaminer and his colleagues recorded the behavior of optical vortices in a two-dimensional material called hexagonal boron nitride.
This material supports unusual light waves called phonon polaritons – hybrids of light and atomic vibrations – that move much more slowly than light alone and can be tightly confined. This creates intricate interference patterns filled with many vortices, allowing the researchers to track their motion in detail.
The second, crucial part was capturing those dynamics in real time. The team deployed a specialized high-speed electron microscope with unprecedented spatial and temporal resolution, which recorded events unfolding over just 3 quadrillionths of a second.
They ran the experiment many times, each time recording at a slight delay compared to the previous run. By stacking together the hundreds of images generated this way, the researchers created a timelapse of the vortices as they hurtled towards and annihilated each other, their velocities very briefly reaching superluminal speeds in the process.
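The delay-and-stack scheme described above can be sketched as a toy (all names illustrative; the team's actual acquisition and reconstruction pipeline is far more involved):

```python
def assemble_timelapse(runs):
    """runs: (probe_delay, frame) pairs, one per repetition of the
    experiment; each run images the same dynamics at a slightly
    different delay. Sorting by delay yields the reconstructed movie."""
    return [frame for _, frame in sorted(runs, key=lambda p: p[0])]

# Toy data: frames arrive out of order across repeated runs.
runs = [(2.0, "frame@2fs"), (0.0, "frame@0fs"),
        (3.0, "frame@3fs"), (1.0, "frame@1fs")]
movie = assemble_timelapse(runs)
# movie is now the frames in time order: 0 fs, 1 fs, 2 fs, 3 fs
```

The trick works because each repetition of the experiment reproduces the same dynamics, so frames taken in separate runs can be treated as successive frames of one event.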
The experiment took place in a two-dimensional context. The next step, the researchers say, is to try to extend their work into higher dimensions to observe more complicated behavior. They also say the techniques they developed could help address some of the current limitations of electron microscopy.
"We believe these innovative microscopy techniques will enable the study of hidden processes in physics, chemistry, and biology," Kaminer says, "revealing for the first time how nature behaves in its fastest and most elusive moments."
The research has been published in Nature.
Friday, April 3, 2026
Violating the Speed of Light: Infinities beyond the Limit of Causality (Light) inside of the Dirac Sea...?
Thursday, April 2, 2026
Holy Week Questions?: For Maundy Thursday...
Maundy Thursday is called "Maundy" because it stems from the Latin word mandatum, meaning "command" or "mandate". This refers to the new commandment Jesus gave his disciples during the Last Supper to "love one another as I have loved you," shortly after washing their feet to symbolize service and humility.
Key details about the name and day:
- The Commandment: In John 13:34, Jesus says, "A new commandment I give to you, that you love one another; as I have loved you...".
- Latin Roots: This "new commandment" is translated in the Latin Vulgate Bible as Novum Mandatum. Over time, mandatum was anglicized to "maundy".
- Washing of Feet: The day commemorates the Last Supper, where Jesus washed his disciples' feet to model humble service. This act is known as the Mandatum.
- Other Names: It is often called Holy Thursday, or in some traditions, "Sheer Thursday" (clean Thursday).
- Traditions: In the UK, the monarch commemorates this day by distributing special coins known as "Maundy money" to residents.
Wednesday, April 1, 2026
Christianity's 'April' Saturnalia
Saturnalia was an ancient Roman festival (Dec 17–23) honoring Saturn, characterized by an intense, temporary inversion of social hierarchy that mirrored a mythical "Golden Age" of equality. Slaves were treated as equals, often served by masters, and allowed to wear the pileus (freedom cap) and act freely.
Key Social Inversions:
Role Reversals: Slaves were permitted to eat with masters, speak freely, and were often served by them, effectively flipping the social order.
Lord of Misrule: A household would choose a Saturnalicius princeps (mock king) by throwing dice; he would then issue ridiculous, absolute commands (e.g., "sing," "dance," "don't wear a toga").
Dress Codes: Strict Roman clothing rules were abandoned: slaves could wear the pileus (felt cap) of freedmen, and everyone wore colorful casual clothes (the synthesis) instead of official togas.
Allowed Vice: Gambling, typically restricted in public, was widely permitted, transforming the city into a scene of wild revelry.
Legal/Business Pause: Courts and schools were closed and no business was conducted; the focus was entirely on dining, drinking, and gift-giving.
Purpose:
This inversion served as a safety valve for society, allowing for "December liberty" to alleviate tensions from rigid social class constraints before reverting to normal, hierarchical life.
Monday, March 30, 2026
Hans-Georg Moeller: AI - The Resurrection of the Idols and "Dhimmi-nution" of the Self?
Eric Weinstein's "Thumb and Forefinger pinching Space gesture" theory revisited...
Sunday, March 29, 2026
Why 'Woke' is Part of the Problem....
Postcolonial theory is an academic framework analyzing the cultural, economic, and political legacy of colonial rule, focusing on how Western powers shaped knowledge and identity in colonized regions. It examines power dynamics, representation, and the enduring impact of imperialism, aiming to decenter Eurocentric narratives.
Key Concepts in Postcolonial Theory
- Orientalism: Coined by Edward Said, this describes how the West (Occident) created a stereotyped, "othered" view of the East (Orient) to justify colonial rule.
- Hybridity: Homi Bhabha’s concept of the mixture of colonizer and colonized cultures, creating new, complex cultural forms rather than simple imitation.
- Subaltern: Gayatri Spivak’s term for marginalized groups—the lowest classes—who are denied a voice or representation in history.
- Agency: The capacity of colonized subjects to act independently and resist colonial power structures.
- Eurocentrism: The tendency to view the world primarily through a European lens, treating European culture as superior or universal.
- Othering: Defining the colonized population as fundamentally different from, and inferior to, the European "self".
Main Themes
- Identity and Representation: Analyzing how colonial discourse created negative or exoticized stereotypes, forcing colonized peoples to adopt "hybrid" identities.
- Power and Knowledge: Challenging the idea that knowledge is neutral, arguing that Western academic, literary, and artistic traditions were used to justify imperialism.
- Resistance: Studying the struggles for independence and the ways indigenous knowledge survived and fought back.
- Neocolonialism: Examining how economic and political dependency persists after formal independence, often through organizations like the IMF or global trade.
Effects on Politics and Identity
Postcolonial theory argues that political independence did not erase structural injustices. It affects national identity by forcing postcolonial nations to navigate between indigenous traditions and the lingering influence of colonial education, language, and legal systems. It also highlights how "postcolonial melancholia" can affect the former colonizer, leading to a nostalgic, often racist, representation of their imperial past.
Main Criticisms of Postcolonial Theory
- High Academic Jargon: Critics (and some practitioners) argue that thinkers like Spivak and Bhabha use dense postmodern language that makes the theory inaccessible to the public.
- Lack of Political Engagement: Some argue that, despite its focus on the "subaltern," the academic field is dominated by intellectuals who publish in English, failing to reach the local populations they study.
- Over-focus on Discourse: Conservative critics argue that it unfairly attacks Western civilization and focuses too much on cultural representation rather than material conditions.
Key Theorists
- Edward Said: Author of Orientalism.
- Gayatri Chakravorty Spivak: Famous for "Can the Subaltern Speak?".
- Homi K. Bhabha: Known for concepts of hybridity and mimicry.
- Frantz Fanon: Wrote about the psychological impact of racism and colonialism.
Key Texts
- Orientalism by Edward Said (1978)
- The Wretched of the Earth by Frantz Fanon (1961)
- In Other Worlds: Essays in Cultural Politics by Gayatri Spivak (1987)
- The Location of Culture by Homi Bhabha (1994)

“Postcolonialism is the invention of rich Indian guys who wanted to make a good career in the west by playing on the guilt of white liberals”
- Slavoj Žižek
STOP Trying to De-Colonize the West!
Saturday, March 28, 2026
On 'Woke' and Non-Woke Moral Paradigm Shifts
Michel Foucault’s episteme is the unconscious, foundational set of rules and "grid" of knowledge that defines the limits of thought, truth, and discourse within a specific historical period. It acts as a "historical a priori" that determines what can be known and accepted as true. These epistemic frameworks shift suddenly rather than gradually, altering the structure of knowledge.
Friday, March 27, 2026
Does Dark Matter Radiate "Space" much as Normal Matter Radiates "Time" (when Travelling Slower than the Speed of Light/ Causality)?
No, time is not a form of radiation. While time and radiation interact within the framework of physics (such as using radiation cycles to measure time), time is generally understood as a measurable dimension of spacetime, or a conceptual framework for ordering events, rather than energy propagating through space.
Key points to understand the distinction:
- What is Radiation? Radiation is energy—either particles or electromagnetic waves—that travels through space, such as light, heat, or X-rays.
- What is Time? Time is a fundamental physical quantity used to measure the duration and sequence of events, and a dimension of the space-time continuum.
- Measurement Role: Atomic clocks use the "9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" to define a second. In this case, radiation measures time but is not time itself.
- Fundamental Difference: Some theories suggest time may be an emergent property or even an illusion, whereas radiation is a tangible energy output within that framework.
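The measurement role is just counting: the SI second is, by definition, a fixed number of periods of the caesium-133 hyperfine radiation. A two-line sketch of the arithmetic:

```python
# The SI second: 9,192,631,770 periods of the caesium-133 hyperfine
# transition radiation. The radiation measures time; it is not time.
CS133_HYPERFINE_HZ = 9_192_631_770  # exact by definition of the second

period_s = 1.0 / CS133_HYPERFINE_HZ         # one period: ~1.09e-10 s
one_second = CS133_HYPERFINE_HZ * period_s  # counting the periods back
```

Counting 9,192,631,770 of those ~0.109-nanosecond periods recovers exactly one second, which is the whole content of the "radiation measures time" point.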

Technofeudalists Conquer the National Science Foundation (NSF)?
The NSF Tech Labs initiative is a newly launched program by the U.S. National Science Foundation (NSF)'s Directorate for Technology, Innovation and Partnerships (TIP). It is designed to create a new generation of independent research organizations that focus on solving complex technical bottlenecks that traditional academic or industry labs cannot easily address.
Key Features of the Initiative
- Targeted Teams: The program funds full-time, interdisciplinary teams of researchers, scientists, and engineers rather than individual principal investigators or isolated projects.
- Operational Autonomy: Selected teams operate with a high degree of independence from existing academic and industry constraints to pursue breakthroughs at "breakneck speed".
- Significant Funding: NSF anticipates making large, multi-year awards in Fiscal Year 2026, with funding for high-performing teams expected to range from $10 million to $50 million per year.
- Milestone-Based Model: Unlike traditional annual grants, funding is tied to demonstrating technical progress toward commercially viable platforms.
- Phased Implementation:
  - Selection Process: A 90-day initial selection period.
  - Phase 0: A nine-month planning phase for concept refinement and team building.
  - Phase 1: A two-year period for scaling operations and pushing for real-world impact.
  - Phase 2: Extended support for high-performing teams to transition innovations to market.
Strategic Goals
The initiative aims to bridge the "valley of death" between foundational research and commercialization. It draws inspiration from models like Focused Research Organizations (FROs) and the Janelia Research Campus, focusing on platform technologies that can reshape entire sectors like AI, quantum technology, and biotechnology.
Companion Program
NSF TIP also introduced the Tech Accelerators Initiative as a companion effort. While it shares core principles with Tech Labs, it provides wider entry points for teams specifically focused on technology translation in key national priority areas.
---
Michael Gibson, "How to Break America’s Great Scientific Stagnation"
President Trump’s selection of Jim O’Neill to head the National Science Foundation could open the next great chapter of discovery.
The biggest factor holding back an American revolution in science is not money but talent identification. For more than half a century, risk-averse bureaucracies and universities have let bold ideas and promising discoveries wither on the vine under the guise of credentialed expertise and the virtues of peer-reviewed incrementalism.
The evidence for this Great Scientific Stagnation is substantial. Research productivity is declining sharply across many domains. Federal R&D spending is more than 30 times what it was in 1956, and more scientists are trained and more papers published than ever. Yet revolutionary breakthroughs are becoming rarer. Many peer-reviewed findings fail to replicate, and a high probability exists that the vast majority of papers (which no one reads) are full of false conclusions. Accusations of fraud in science are on the rise.
America must break through this chokepoint by focusing on its greatest resource: talent. It matters how, and to whom, we award grants. We should be working toward tapping the energy at the heart of the sun and hanging our achievements in the balance of the stars. Instead, federal science funding has often drifted elsewhere: on promoting insects as human food ($2.5 million), watching monkeys gamble ($3.7 million), observing brain-damaged cats walk on treadmills ($549,000), and sending cash to DEI bird watching clubs ($288,563).
Fortunately, American science may soon get a lot more exciting—faster, wilder, and even more rigorous. On March 2, President Trump nominated Silicon Valley financier Jim O’Neill as director of the National Science Foundation. O’Neill served most recently as Deputy Secretary of Health and Human Services and acting director of the Centers for Disease Control and Prevention.
With a budget of nearly $9 billion, the NSF determines which nonmedical scientists receive funding, which university labs are supported, and which frontiers advance. With O’Neill at the helm, the old slow-drip model of incremental, consensus-driven funding would get a much-needed shake-up.
But O’Neill must first clear the Senate confirmation process, and the opposition is already sharpening its knives. California Representative Zoe Lofgren, the leading Democrat on the House Committee on Science, Space, and Technology, told Science that O’Neill isn’t fit for the office. “[G]iven his track record at HHS and CDC under [HHS Secretary Robert F. Kennedy Jr.], Mr. O’Neill seems like a bad choice to lead the National Science Foundation, our nation’s premier scientific agency.”
Science ran a second article featuring scientists skeptical of O’Neill. Neal Lane, a former NSF director under President Bill Clinton and a physicist at Rice University, said, “I think it’s unfair to ask him to do the job”—largely because O’Neill lacks an advanced science degree. Michael Turner, a cosmologist at the University of Chicago, added that he sees O’Neill and the Trump administration’s direction as overly commercial and “shortsighted.”
Full disclosure: I owe O’Neill a great deal personally. In 2010, he introduced me to Peter Thiel, for whom he then worked as head of the Thiel Foundation and a research lead at his hedge fund. On my first day, September 27, 2010, we launched the Thiel Fellowship, which awarded 20 young people a year $100,000 grants—with two notable conditions: applicants had to be under 19 and agree to drop out of college.
Critics torched the Thiel Fellowship from the start. My favorite was Larry Summers, former president of Harvard, who called our program the “single most misdirected philanthropy of the decade.” Today, however, the fellowship is a strong predictor of future billionaires. Its grantees have generated more than $500 billion in value since the program began. Its hit rate—the share of fellows who start tech companies that reach $1 billion in market cap—exceeds that of top accelerators like Y Combinator and institutions like Harvard Business School.
What I learned running the fellowship with O’Neill, Thiel, and others during its first three years is that America has lost sight of what it takes to produce new inventions and discoveries. That failure has led those who run major institutions to misjudge talent. The people running Harvard or the NSF—the architects of the “Great Scientific Stagnation”—think in terms of buying prestige: the number of papers published, the status of the journals in which they appear, citation counts, the reputations of endorsing professors, prior grants, grant size, university affiliation, and Ph.D. pedigree.
If you’re head of the NSF, responsible for managing more than $10 billion in federal spending to advance science, your goal should be to buy discoveries, not prestige. And to buy discoveries, you need to find and fund creative genius, wherever it turns up.
When I see universities like Johns Hopkins or the University of Pennsylvania collecting billions in grants each year, I see evidence of a flawed understanding of where discoveries come from. True, their scientists produce solid work and, at times, important theories, but are they worth the billions the government sends them? If the “Great Scientific Stagnation” thesis is right—and the evidence suggests it is—the answer is no.
Why aren’t Americans getting a better bang for their science buck? Grant-makers often overlook talent because of perceived flaws: age, appearance, personality, among others. The supposed defects of Thiel Fellows were their youth and lack of a college degree, yet those attributes proved irrelevant. Our job was to predict outcomes and take big risks; the results speak for themselves.
When O’Neill got the nod to lead the NSF, I texted to ask how I could help. “Send me your ideas,” he replied. Here are five.
First, the NSF should adopt the Thiel Fellowship model (or develop its own) for identifying young, overlooked, and therefore undervalued talent. The central challenge is improving selection while relying on less evidence. At the Thiel Fellowship, we built models that assessed the raw character of an engineer or scientist and translated it into an estimate of strengths and likely success.
I emphasize “young” because creativity is perishable. As with athletes, there is a prime age range when the mind is most fecund and sharp. One of American science’s key chokepoints is that institutions don’t trust younger researchers to do great work. The average age of first-time grant recipients is about 40 to 45; that should drop by two decades.
The economist Benjamin Jones’s research, spanning the past century, shows that innovators are reaching their greatest achievements at increasingly older ages. That might be fine if creativity and productivity remained constant over a lifetime—but they do not. Late starts mean shorter careers, resulting, by Jones’s estimate, in a 30 percent decline in the potential for new discoveries and inventions. By analogy, imagine how unimpressive career statistics would be if Major League Baseball barred players from competing until age 30. Albert Einstein was 26 when he wrote the four papers that revolutionized physics in 1905. Isaac Newton was 23 or 24 when he developed calculus and the theory of gravity. The NSF should be looking in these age ranges for tomorrow’s talent.
Second, the NSF must speed up the grant-review process. As it stands, it’s a circus. Multiple studies find that scientists spend 20 percent to 40 percent of their working hours preparing applications that have only about a one-in-five chance of success and take far too long to process. Endless peer-review rituals and elaborate decision procedures, neither of which improve outcomes, slow the pace of discovery.
We should accept a higher risk of flubs, blind alleys, and dead ends in exchange for a better shot at major breakthroughs. Two ideas are worth testing. One is a partial lottery: randomly select from applications that clear a quick initial screen. The other would be scout programs: recruit a rotating network of proven agents—working scientists, professors, independent thinkers—with firsthand knowledge of emerging talent. No interviews or prestige proxies. Give credentials near-zero weight. Scouts would simply select recipients, who wouldn’t even know they were being evaluated until the funding arrived. To avoid entrenched patronage, scouts would serve two-year terms and then pass their authority to a peer outside their immediate professional circle.
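The partial-lottery idea is simple enough to sketch in code (names and structure are hypothetical illustrations, not an actual NSF procedure): screen quickly, then draw winners at random from everything that cleared the screen.

```python
import random

def partial_lottery(applications, passes_screen, n_awards, seed=None):
    """Partial-lottery grant selection: a quick screen filters
    applications, then awards are drawn uniformly at random from the
    pool that cleared it -- no further ranking or peer review."""
    rng = random.Random(seed)
    pool = [a for a in applications if passes_screen(a)]
    return rng.sample(pool, min(n_awards, len(pool)))

# Toy applications: half pass a basic soundness screen.
apps = [{"id": i, "sound": i % 2 == 0} for i in range(10)]
awards = partial_lottery(apps, lambda a: a["sound"], 3, seed=42)
```

The design point is that randomization above a quality floor removes prestige proxies from the decision entirely, at the cost of accepting more variance in outcomes.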
Third, measure what matters: the expected magnitude of discovery, not the expected number of citations. Create a “renegade scientist” grant program that explicitly rewards risky, interdisciplinary, even unconventional, proposals that peer review tends to kill. Pair it with a clear list of major unsolved problems that the grants are meant to tackle.
Fourth, use the NSF’s funding and authority to break the cartel of prestigious scientific journals. Taxpayers fund the research, only to have access sold back to them at exorbitant prices. The NSF should use its leverage to free that work from paywalled journals by requiring that all government-funded research be publicly accessible.
Fifth, fix the incentives. Cap or eliminate indirect-cost siphons to universities, especially those with endowments in the tens of billions. Fund people, not buildings. Today, if a university scientist wins a grant, the university claws back more than half of it for “indirect costs” or administrative overhead. This rake-off goes not to the actual business of scientific discovery but to salaries for DEI officers or planting flowers in the quad. The NSF should also require an “idiot index”—a comparison between a scientist’s estimate of an experiment’s cost and the university’s. The aim is to drive spending toward tools and lab space, not bureaucracy.
Do these five things, and the NSF will become the fire for American ingenuity instead of being a steward of stagnation. We have the money and the talent. What’s missing is the courage to stop buying prestige and start buying discoveries.
Jim O’Neill has spent his career proving he can spot that courage in others. Now he should get the shot to institutionalize it. The Senate should confirm him so the next chapter of revolutionary science can begin. As he wrote in an X post announcing his nomination, “Entropy is on the march and China is not waiting.”
Wednesday, March 25, 2026
Did the Holocaust Irreparably Damage Western Civ's 1st Principle of Universal Humanism?
The 1st Principle of Unitarian Universalism is the affirmation and promotion of "the inherent worth and dignity of every person". This guiding principle highlights the foundational belief in the inherent value of all individuals, setting the stage for inclusivity, justice, and compassion in all human interactions.
Key Aspects of the 1st Principle
In contrast, within the context of Christian Universalism, the fundamental doctrine is that all human beings—and potentially all of creation—will ultimately be saved and reconciled with God.
- Universal Value: It asserts that every individual possesses worth, regardless of background, faith, or actions.
- Grounding: This principle is rooted in humanist teachings and the shared belief in the dignity of all human beings.
- Core of the Principles: It is the first of seven principles adopted by the Unitarian Universalist Association, which emphasize justice, equity, and compassion in relationships.
Tuesday, March 24, 2026
Aneural Learning
Aneural learning refers to the ability of organisms or systems that lack a nervous system (neurons/brain)—such as single-celled organisms, plants, and bacteria—to exhibit behavioral plasticity, memory, and cognitive-like processes. This field challenges the traditional view that learning is solely a function of nervous systems, suggesting that cognitive processes may have predated the evolution of neurons.
Key Aspects of Aneural Learning:
Mechanisms: Aneural learning is often supported by molecular networks within cells that process and store information, acting as "wetware". These systems can exhibit habituation (reducing response to a familiar, harmless stimulus) and sensitization (increasing response to a harmful stimulus).
Examples in Nature:
- Single-celled Organisms: Physarum polycephalum (slime mold) can be trained to associate time with a cold shock or respond to stimuli as a sign of food. Ciliates like Stentor roeselii demonstrate complex decision-making and avoidance behaviors when exposed to harmful stimuli.
- Plants: Pea plants have been conditioned to associate air movement with light, demonstrating associative learning without a brain.
- Immune System: Immune cells can show learning-like behaviors such as generalization, based on molecular mimicry.
Research Applications: Insights from aneural systems are being used to develop new computational models, such as "weightless" networks or non-connectionist neural networks.
Significance: Studying aneural learning helps researchers understand the basic components of behavior and decision-making, such as:
- Perception and Memory: Storing information about environmental stimuli.
- Behavioral Plasticity: Changing behavior based on experience.
- "Irrational" Cognition: Some aneural organisms demonstrate creative or "wrong" solutions (irrational learning) that may still offer survival advantages.
Monday, March 23, 2026
Sunday, March 22, 2026
Gatekeeping the 'Woke' Institutional Narrative (Woke GIN)...
Online Life: The Source of Our Hyper-Reality
"You get your information on the facts merely by looking at what others have to say about it."
- Niklas Luhmann, "Introduction to Systems Theory"
from Google AI:
This quote from Niklas Luhmann (often rendered within the context of "second-order observation") describes how modern society handles knowledge and reality, as discussed in his work Introduction to Systems Theory (specifically regarding observations and the "observation of observers").
Context of the Quote
Luhmann argues that modern society has shifted from directly observing reality to "second-order observation"—the observation of observers.
Descriptions of Descriptions: Instead of looking at "facts" directly, we look at reports, news, scientific papers, and opinions about those facts.
Advanced Mode of Perception: This is not seen as a failing, but as the "advanced mode" of perceiving the world in functional domains like academia, economy, art, and politics.
Examples: We know about global warming, economic trends, or current events not by direct observation, but by "looking at what others have to say about it".
Meaning in Systems Theory
Observation of Observers: In a complex, functionally differentiated society, we cannot experience everything. Instead, we observe how others (scientists, journalists, politicians) observe the world.
Structural Complexity: Because function systems (like law, science, or media) are "operatively closed," they only observe their own operations and create their own version of reality.
Reducing Complexity: We "get our information on the facts" by selecting from the massive amount of communication produced by these systems.
This approach highlights that in the modern world, knowledge is essentially social, mediated, and constructed, rather than a direct perception of objective reality.
“Where is the Life we have lost in living? Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”
- T. S. Eliot
Friday, March 20, 2026
Flaneurial Fascinations...
Thursday, March 19, 2026
Wednesday, March 18, 2026
New Particle Re-Combinatorics... A Heavy (x4) Proton?
Snappily named Xi-cc-plus, the particle was spotted by Cern physicists in a shower of debris that lit up the Large Hadron Collider
Scientists at the Cern nuclear physics laboratory near Geneva have discovered a heavier version of the proton, the subatomic particle that sits at the heart of every known atom in the universe.
They spotted the particle in a shower of debris that lit up a detector at the Large Hadron Collider (LHC), located deep beneath the ground at Cern, which smashes protons together at close to the speed of light. The collisions recreate in microcosm conditions that prevailed just after the big bang, with the energy converting to particles that spray in all directions.
The newfound particle, which is four times heavier than the regular proton, should help physicists refine their understanding of the strong nuclear force that glues together the innards of all atomic nuclei. The force is unusual because it behaves like a rubber band, getting stronger as the distance between subatomic particles increases.
Physicists working on the LHCb experiment found the heavy proton after the detector was upgraded to make it more powerful.
“This is just the first of many expected insights that can be gained with the new LHCb detector,” said Prof Tim Gershon at the University of Warwick, who takes over as the LHCb international lead in July. “The improved detection capability allowed us to find the particle after only one year, while we could not see it in a decade of data collected with the original LHCb.”
Atoms of hydrogen, the simplest and most abundant element in the observable universe, contain only a proton and an electron. Protons, along with neutrons in heavier atoms, consist of elementary subatomic particles called quarks. A proton contains two up quarks and one down quark, but there are heavier, unstable versions of quarks known as charm, strange, top and bottom.
In the heavy proton detected at Cern, both up quarks are replaced with charm quarks. The particle, snappily named Xi-cc-plus, was revealed by its signature decay into other particles. After popping into existence, it does not hang around: scientists suspect it survives for less than a millionth of a millionth of a second before breaking down.
“The more we learn about these particles, the more we can learn about the strong force, and that is the same strong force that binds our protons and neutrons together,” said Prof Chris Parkes, a physicist at the University of Manchester.
The discovery comes as UK Research and Innovation (UKRI), the nation’s science funder, faces fierce criticism for its plans to pull £50m of funding for the LHCb’s final upgrade in the 2030s. The revamp would ensure the detector made the most of a major transformation to the LHC that could substantially improve its discovery potential.
UK scientists working in particle physics, astronomy and nuclear physics have been told their grants will be slashed following cost overruns at major science facilities. Projects have also been hit, including the next LHCb upgrade and an electron-ion collider under development with researchers in the US.
Last week, Chi Onwurah, chair of the Commons science committee, sent a scathing letter to Prof Ian Chapman, chief executive of the UKRI, and Patrick Vallance, the science minister, calling the cuts “wholly unacceptable” and “a failure” by UKRI, the Science and Technology Facilities Council and the Department for Science, Innovation and Technology.
The letter demands “swift and decisive action” and asks whether the decision on the LHCb upgrade is final.
“It is so important that we can overcome the problems caused by the UKRI decision to deprioritise the funding for this project,” Gershon said. “No other experiment either running or planned will be able to do this physics.”
Exploring the Future of Quantum Technologies
4:43 What causes our very first heartbeat?
6:36 Noble’s 1958 research on the first heart model
8:40 On self-excitation in cells (and what “self” means)
9:24 The central dogma in biology
11:17 Schrödinger’s view of life as a crystal
13:43 To what degree DNA replicates like a crystal
15:16 The amazing error correction in our genome
16:59 How enzymes know when they encounter an error
19:19 “Genes look like a code of life…”
22:05 The merits and limitations of the Human Genome Project
23:39 Can we really say “the cell wants” something?
24:51 Understanding the scales and extraordinary mechanisms in a cell
27:18 What we do and don't understand
29:16 On Michael Levin’s work
31:23 On cancer
35:41 Neo-Darwinism vs true Darwinism
38:19 Something must have sped evolution up
41:22 The cell controls the genome
44:19 On the metaphysics of chemistry leading to life
46:42 Biological relativity
51:08 The universe as a self-excited circuit
52:18 On Richard Dawkins
54:27 On the difference between causation and association
56:48 The limitations on the predictive power of genomics
58:46 The false hopes around the Human Genome Project
1:00:20 The central dogma in biology has the wrong metaphysics
1:07:03 Noble on Spinoza
1:11:08 How dualistic thinking still limits us
1:13:40 On the nature of the self
1:17:06 How life lives on the boundary between order and chaos
1:18:32 How errors become solutions
1:19:51 A love story between a human and an AI
1:23:58 On quantum biology
1:26:27 On the importance of humility in science
1:28:16 How we crave meaning (and reductionist science has deprived us of it)
1:29:07 Denis Noble singing troubadour poetry
1:30:27 Science must lay down its weapons
1:32:18 What dancing to the tune of life means on a personal level
Causation and Association/Correlation Multiplicities = Intelligence (@ 52:00–56:48)? Why, when one approach fails, do multiple others react and attempt to compensate for the failed mechanism? Multiple "agents" applying (at a multiplicity of biological levels) a "use it or lose it" philosophy?
0:00 - Introduction
0:44 - Biological intelligence
9:17 - Living vs non-living organisms
14:30 - Origin of life
18:15 - The search for alien life (on Earth)
51:19 - Creating life in the lab - Xenobots and Anthrobots
1:04:21 - Memories and ideas are living organisms
1:18:02 - Reality is an illusion: The brain is an interface to a hidden reality
2:03:48 - Unexpected intelligence of sorting algorithms
2:29:26 - Can aging be reversed?
2:33:17 - Mind uploading
2:51:57 - Alien intelligence
3:06:52 - Advice for young people
Tuesday, March 17, 2026
Pericalypses: A (Q)want'lem Life in a Perfect Vacuum
Advice to the Grub Street Verse-writers by Jonathan Swift
On Understanding & Proving Feynman Diagrams
The Lamb shift is a small energy difference between the 2S1/2 and 2P1/2 states of hydrogen, not predicted by the Dirac equation. It arises from the electron's interaction with virtual-photon vacuum fluctuations, which force a tiny, rapid oscillation of the electron's position. Renormalization of the electron's mass, essential in Quantum Electrodynamics (QED), allows the divergence to be removed, yielding a finite value for the shift. This same vacuum interaction also contributes to the electron's anomalous magnetic moment.

Key aspects of the Lamb shift and QED:
- Virtual photons & vacuum fluctuations: The Lamb shift is physically understood as the interaction between an atomic electron and the virtual photons constantly emitted and reabsorbed from the quantum vacuum. These interactions create a "buffeting" effect (a rapid, small-scale random motion) of the electron, often described as a smearing of the effective Coulomb potential the electron feels from the nucleus.
- Renormalization: Early calculations of this interaction gave divergent results, which were resolved via renormalization. Hans Bethe calculated the shift in 1947 by subtracting the infinite, unobservable self-energy of a free electron (renormalizing the mass) from the self-energy of the bound electron, obtaining a finite, measurable shift of approximately 1057 MHz, matching the experiment of Willis Lamb.
- Electron magnetic moment: As with the Lamb shift, the anomalous magnetic moment (the g-factor anomaly, (g-2)/2) of the electron arises from QED radiative corrections, primarily the exchange of virtual photons between the electron and itself or with an external magnetic field.
- The shift in detail: The Lamb shift lifts the degeneracy between states with the same J (total angular momentum) but different L (orbital angular momentum), such as 2S1/2 and 2P1/2, with the 2S1/2 state slightly higher in energy (about 4.37 × 10⁻⁶ eV).
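The two figures quoted for the shift — roughly 1057 MHz and a few microelectronvolts — are the same number in different units, related by E = h·f. A quick sanity check using the exact SI values of Planck's constant and the electronvolt:

```python
# Convert the measured Lamb shift frequency (~1057 MHz) into an energy.
H = 6.62607015e-34    # Planck constant, J*s (exact, SI 2019)
EV = 1.602176634e-19  # joules per electronvolt (exact)

def lamb_shift_energy_ev(freq_hz: float) -> float:
    """Photon energy E = h*f at the given frequency, expressed in eV."""
    return H * freq_hz / EV

energy = lamb_shift_energy_ev(1057e6)
print(f"{energy:.3e} eV")  # ~4.37e-06 eV, matching the figure above
```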
Monday, March 16, 2026
Why Gravity Isn't "Travelling" Faster than the Speed of Light (c - Causality)
Retardation cancellation in gravity refers to the relativistic phenomenon in which the delay in gravitational interaction (due to gravity traveling at the speed of light) is nearly perfectly cancelled by velocity-dependent terms in general relativity. This cancellation makes gravitational forces point toward a body's current, "instantaneous" position rather than its delayed (retarded) position, preventing the orbital instability that would otherwise occur.

Core concepts of retardation cancellation:
- The problem of retardation: If gravity travels at the speed of light (c), Earth should technically feel the Sun's gravity from about 8 minutes ago (its "retarded" position). If this were the only effect, the resulting torque would rapidly destabilize Earth's orbit on astronomically short timescales, which does not happen.
- The cancellation mechanism: In general relativity, the retardation effect is cancelled by velocity-dependent terms that act as a form of correction; the cancellation is far more complete than the analogous effect in electromagnetism.
- Why it matters (stability): This cancellation ensures that orbits remain stable, because the gravitational field appears to act instantaneously even though changes in it actually propagate at light speed.
- Role of velocity and acceleration: For non-accelerating (or slowly accelerating) sources, the retardation effect on gravity is practically non-existent. For systems that radiate energy via gravitational waves, however, the cancellation is not complete.
- Alternative explanations (MOND): Some theories suggest that in very low-acceleration environments (such as the outskirts of galaxies), these retardation corrections are related to Modified Newtonian Dynamics (MOND) and could account for effects usually attributed to dark matter.

Essentially, retardation cancellation is what reconciles general relativity with observation: gravity travels at c, yet behaves as if it acted instantaneously, maintaining stable orbital motion.
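To put a number on the retardation problem: if the Sun's pull pointed at Earth's retarded position with no velocity correction, the mispointing angle would be of order v/c, Earth's orbital speed over the speed of light. A short back-of-the-envelope sketch (standard astronomical constants, nothing assumed beyond them):

```python
# How far "stale" would naive retarded gravity be for Earth's orbit?
import math

C = 2.998e8        # speed of light, m/s
AU = 1.496e11      # mean Sun-Earth distance, m
V_EARTH = 2.978e4  # Earth's mean orbital speed, m/s

light_time = AU / C                  # Sun-to-Earth light-travel time, s
aberration_rad = V_EARTH / C         # angle between retarded and true direction
aberration_arcsec = math.degrees(aberration_rad) * 3600

print(f"light travel time: {light_time:.0f} s (~8.3 min)")
print(f"aberration angle:  {aberration_arcsec:.1f} arcsec")
```

The ~20 arcsecond mispointing that this naive picture predicts is exactly what the velocity-dependent terms in general relativity cancel, which is why orbits stay stable.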
---
Gravitational fields, created by massive objects, exist as three-dimensional structures extending through space, influencing other masses via attraction. Described by general relativity as spacetime curvature, these fields act as a continuous, energetic medium bridging objects rather than mere empty space, forming a structured, invisible "web" that directs motion.

Key aspects of this gravitational "structure" include:
- Three-dimensional structure: Gravitational fields are not just lines on paper but continuous three-dimensional influences filling the space surrounding and between massive objects.
- Structure of spacetime: General relativity describes these fields as the actual curvature or warping of spacetime, which can be perceived as a sometimes rigid or fixed structure.
- Energy density: These fields, especially in the context of GR, are not "empty" space; they carry non-zero, measurable energy that determines the motion of objects within them.
- Field lines as a visual aid: While not physical wires, field lines are used to visualize the direction and intensity of this structure, pointing toward the centers of massive bodies.
- Force propagation: The field acts as an invisible force field (or "cosmic field"), allowing objects to influence each other without touching.
A gravitational field acts as an invisible, static structure surrounding any massive object, with its intensity and direction determining how other objects move near it. While classical mechanics often treats this as a force field pointing toward the center of mass, modern physics describes the "structure" as the curvature of spacetime itself.

Key aspects of this gravitational structure include:
- Static nature: The field is "static" in the sense that it does not move relative to the object creating it (e.g., Earth's gravitational field stays with the Earth).
- "Rigid" characteristics: The field is often depicted as a "potential energy landscape" or "well" that stays fixed around the massive object.
- Structure of space: Einstein's general relativity explains that the field is not a rigid substance holding objects but a warping of spacetime geometry: objects follow straight-line paths (geodesics) that appear curved because the space itself is curved.
- Attractive force: Within this field, any mass (or energy) experiences a force pulling it toward the source, with a strength inversely proportional to the square of the distance.

Analogy as a "rigid structure": This idea is commonly visualized as a "frictionless ski hill" or a "funnel" in space around a massive object, directing the motion of nearby objects. For spherical bodies like planets, this structure is generally uniform and acts like a rigid "nested set of shells" extending through space.
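The inverse-square falloff mentioned above is easy to see in numbers. A minimal Newtonian sketch of Earth's field strength at a few distances (GM and the radius are standard published values):

```python
# Newtonian field strength g = G*M / r^2 around Earth.
G_M_EARTH = 3.986e14  # gravitational parameter G*M for Earth, m^3/s^2
R_EARTH = 6.371e6     # Earth's mean radius, m

def field_strength(r_m: float) -> float:
    """Gravitational acceleration (m/s^2) at distance r from Earth's center."""
    return G_M_EARTH / r_m**2

for mult in (1, 2, 4):
    g = field_strength(mult * R_EARTH)
    print(f"at {mult} Earth radii: g = {g:.2f} m/s^2")
# Doubling the distance quarters the field strength.
```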
The de Sitter effect (or geodetic precession) is a prediction of general relativity in which an orbiting gyroscope's axis precesses due to the curvature of spacetime caused by a central mass, such as Earth. Confirmed by Gravity Probe B at about 6.6 arcseconds per year, it highlights how spacetime geometry dictates motion.

Key aspects of the de Sitter effect:
- Origin: Predicted by Willem de Sitter in 1916 as a correction to Earth-Moon orbital motion, it describes how a spinning object in curved space does not return to its initial orientation after a full orbit.
- Gravity Probe B (launched 2004): This mission verified the effect by tracking gyroscopes in Earth orbit, confirming the precise misalignment caused by Earth's bending of spacetime.
- Mechanism: It is purely a consequence of spacetime curvature (the geodetic effect) and is distinct from frame-dragging (the Lense-Thirring effect), which is caused by the rotation of the mass itself.
- Modern context: It is often discussed alongside "de Sitter space," an expanding, empty universe with a constant positive cosmological constant used to model early-universe inflation.

Difference from "de Sitter space" and "de Sitter gravity": While the de Sitter effect refers to this specific gyroscopic precession, de Sitter gravity usually refers to a theoretical framework (or a "toy model" in 1+1 dimensions) in which gravity acts within a de Sitter universe, often in cosmological studies of dark energy.
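The 6.6 arcsec/yr figure can be roughly recovered from the standard geodetic precession rate for a circular orbit, Ω ≈ (3/2)·GM·v/(c²r²). The orbital radius below (~642 km altitude) is an assumption based on Gravity Probe B's published polar orbit:

```python
# Rough check of the Gravity Probe B geodetic precession figure.
import math

G_M_EARTH = 3.986e14  # G*M for Earth, m^3/s^2
C = 2.998e8           # speed of light, m/s
R_ORBIT = 7.02e6      # assumed orbital radius, m (Earth radius + ~642 km)

v = math.sqrt(G_M_EARTH / R_ORBIT)                 # circular orbital speed
omega = 1.5 * G_M_EARTH * v / (C**2 * R_ORBIT**2)  # precession rate, rad/s

SEC_PER_YEAR = 3.156e7
arcsec_per_year = math.degrees(omega * SEC_PER_YEAR) * 3600
print(f"geodetic precession: {arcsec_per_year:.1f} arcsec/yr")  # ~6.6
```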
---
The statement above is ✅ True, but it requires a specific physical understanding of how gravity "moves." In the framework of general relativity, a gravitational field is not a separate entity that exists on top of space; rather, it is the curvature of spacetime itself [14, 31, 32].

Explanation:
- Spacetime Curvature: Mass and energy tell spacetime how to curve [8, 32]. As an object moves through space, the source of that curvature moves with it, effectively "dragging" the distortion of spacetime along its path [21, 24].
- Speed of Gravity: Changes in a gravitational field (such as those caused by a moving mass) do not happen instantaneously throughout the universe. They propagate at the speed of light (c) [37].
- Retarded Potential: Because gravity travels at a finite speed, an observer at a distance experiences the gravitational field of a moving object as it was at a slightly earlier time (the "light-travel time" between the object and the observer) [37].
- Gravitomagnetism (Frame-Dragging): Massive objects that are rotating or moving at high speeds don't just "drag" their static field; they actually "twist" the surrounding spacetime, a phenomenon known as frame-dragging or the Lense-Thirring effect [21].
Key Distinction: While the field "travels" with the object, it is more accurate to say that the source of the curvature moves, and the surrounding spacetime continuously re-adjusts its shape at the speed of light to reflect the object's new position [37].
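The "retarded potential" idea above reduces to a simple implicit equation: the field arriving at the observer now was emitted at the earlier time t_r satisfying |x_obs − x_src(t_r)| = c·(t − t_r). A minimal sketch, with a hypothetical source moving uniformly at half the wave speed (units chosen so c = 1):

```python
# Solve for the retarded time of a uniformly moving source by fixed-point
# iteration: t_r = t - |x_obs - x_src(t_r)| / c.
C = 1.0  # wave speed, in units where c = 1

def x_src(t):
    """Hypothetical source trajectory: uniform motion along x at 0.5c."""
    return 0.5 * t

def retarded_time(x_obs, t, iters=60):
    """Iterate the defining equation until t_r converges."""
    t_r = t
    for _ in range(iters):
        t_r = t - abs(x_obs - x_src(t_r)) / C
    return t_r

t_r = retarded_time(x_obs=10.0, t=0.0)
print(t_r)  # negative: the observer "sees" the source as it was in the past
```

For this trajectory the iteration converges geometrically to t_r = −20: the signal left the source 20 time units ago, when the source was 20 light-units away.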