[syndicated profile] balioc_tumblr_feed

For Reasons, I’ve been reading an awful lot of xianxia/cultivation webnovels this year.

Mostly I’ve been reading three serials in particular (which qualifies as “a lot,” given how insanely long these things get). All of them are, I gather, popular and generally considered to be high-quality exemplars of the genre; at least, they were recommended to me on those terms, by knowledgeable parties whose taste I trust.

They’re pretty different from one another in terms of tone and focus, which is notable given how famously formulaic and trope-y the genre is as a whole.

Which makes it kind of remarkable how conspicuously sexually conservative they all are, as texts. It is a unifying feature that I definitely didn’t expect.

One of them features a protagonist who is deeply sex-averse for trauma-related reasons, in a way that has caused her to turn away from anything that smacks of romance or intimacy, and it has taken her eight (hefty) volumes to get to the point of even thinking “maybe I should do something about that.” One of them features a protagonist who is asexual for reasons of physical malfunction until he levels up enough to fix his body, at which point he transitions seamlessly to an unshakeably-one-target sexuality directed at his beloved life partner. One of them features an isekaied protagonist who quickly gets married to the first woman he meets in the new world (almost literally), and who is as faithful and loyal to her as a golden retriever; he’s the POV, so we know what he’s thinking, and he never experiences the slightest attraction to anyone else. (His wife is, admittedly, kind of a pervert, but just in a silly way that lets her make #relatable dirty jokes, not in a way that would actually result in anyone ever doing anything.)

…and it’s not like it’s weird to have a happy stable monogamous couple, or a protagonist who isn’t interested in sex or romance. It’s not even very weird to have three instances of those things. The weirdness, such as it is, comes from the awareness - discernibly shared by all three texts - that this is a literary choice, and it is not the default choice. Which is obviously true, because xianxia/cultivation is a genre full of harems and sexiness. All three texts comment explicitly on how the protagonist isn’t pursuing a life of boundless sexual opportunity, and there’s an implicit sense in all of them that you should find this fact noteworthy and meaningful.

The reasons for this, I’m sure, are varied and complicated. In at least one case, the novel is a cleaned-up version of an audience-participation “quest” text (like Andrew Hussie’s early work), and so the protagonist’s choices reflect nothing so much as the unpredictable decisions made by a group of forum denizens.

But I can’t shake the feeling that this is being used by all of them, at least to some extent, as a quality marker. “Look, I know that I’m writing in a genre full of self-indulgent wish-fulfillment trash, but I promise that this isn’t trash! See? No harems anywhere! No chance of that at all!”

And, having seen it in the xianxia webnovels, I find myself seeing a very similar dynamic in a lot of other places…

[syndicated profile] multiheaded_feed

The obvious modest proposal for an effective altruist foreign policy platform is for EAs to always support any genocide anywhere in the world. Genocide targets are preselected for being marginalized and institutionally excluded yet typically attract some partisan sympathy, hence have a poor ratio of tractability to exposure, and there are likely longstanding factors behind their marginalization and exclusion. Meanwhile, genocide perpetrators are strongly motivated actors with established governance and security capabilities who look to obtain legitimation and international backing, and are not to be alienated. There is every reason to take advantage of this and cooperate with them on major neglected causes such as securing the lightcone.

[syndicated profile] crooked_timber_feed

Posted by Hannah Forsyth

Last week Australia’s central bank (Reserve Bank of Australia, RBA) raised interest rates. Again.

Political economists have been talking for decades about the RBA’s tendency to redistribute wealth from the bottom upwards. But now it seems most people understand that the latest interest rate rises require ordinary people to hand over more of their cash to their bank, to get it out of circulation and bring down inflation.

Asking whether superannuation or taxes could also be used to bring down inflation, the ABC pointed out that interest rates were not always the way inflation was managed. They published an article asking ‘Would you rather hand over an extra $300 a month to your bank or the federal government?’ – suggesting that this might even be an option.

Rightly, the ABC points to the place of government in setting up this structure. But history shows that, for all that government is nominally in charge… well. You might have noticed that banks are fairly powerful. Government v bank doesn’t always mean the government wins… as we will see.

Battle of the Banks

I recently published a review of Bob Crawshaw’s Battle of the Banks, which is about the role of the media in what nearly every historian agrees was a controversial (sometimes seen as just plain mad) decision on the part of 1940s Labor Prime Minister Ben Chifley to try to nationalise Australia’s banking sector.

We have a number of accounts of this fairly notorious episode in Australian history. This one might be the most rollicking. Here is my review, though you probably need institutional access to the Journal of Australian Studies to read it. Yell out if you can’t and I can send you a pre-published version.

The basic story of the battle of the banks is this:

  • The Curtin/Chifley governments had been able to use the banking system (especially the ‘People’s Bank’, the Commonwealth Bank of Australia, which was owned and operated by the government as a central, merchant and trading bank) to help finance Australia’s participation in the Second World War.
  • They now sought to use similar measures to enable them to finance Post-War Reconstruction, which among other things included a very substantial housing program, which they said would fulfil all the dreams of ‘Mrs Australia’.
  • To do this, there was a new banking Act. Led by what is now NAB, the commercial banks challenged the Act in the High Court. Based on a bit of the constitution about money moving across state borders as a foundational goal of federation, one of the provisions of the Act (requiring local government to bank with the Commonwealth Bank so that the flow of cash would help finance housing) was deemed unconstitutional.
  • Evidently pissed off, Chifley called a Cabinet meeting where it was agreed that since this Act was bust, they would nationalise the banks.

At this point nearly every historian (including Crawshaw) declares this to be ‘rash’, as if Chifley just thought it up out of pique and somehow bulldozed cabinet into this crazy plan.

But in fact bank nationalisation had been Labor policy for several decades.

Populist Money Movements

In the late nineteenth and early twentieth centuries, the labour movement developed a serious skepticism about the banking sector.

Historian Peter Love wrote an excellent book a while back now about populist opposition to ‘the money power’, which grew as banking became more influential in the development of Australian capitalism.

Peter Love shows the way this movement helped cohere working class activism in the face of multiple crises, especially the bank crashes of the 1890s and the 1930s Great Depression.

In the 1920s, opposition to the ‘money power’ also coalesced into a politics attached to Douglas Credit. This was a (kinda wacky, in retrospect) idea that a new kind of money could be distributed as a kind of ticketing system. This would guarantee consumer demand on one hand, and redistribute national wealth on the other, rather than allowing historical power blocs to accumulate more, while others have insufficient money to purchase what they need. It is a precursor, in some ways, to both MMT and a universal basic income.

In the 1920s, when ideas and practices of banking, money, economics and politics were still a little more up for grabs than they now seem, the labour movement’s anxiety about the money power helped give Douglas Credit political potency. The political party linked to the idea made some progress in the 1930s.

During and after the Great Depression, the idea that we could fix things by issuing currency differently took such hold that it grew into a key reason (on the surface at least) for a Royal Commission into the Monetary and Banking Systems in Australia, commencing in 1935 and reporting in 1937. Reading the report and the submissions from banks, one gets the impression that Social Credit was the public reason for the Royal Commission. Underneath it – at least to my (fairly cursory…SO FAR) reading – was a desire to consolidate data about banking to see what sort of regulation and coordination the sector needed in the wake of the Great Depression.

I wrote about this recently for Griffith Review.

Banks are a utility

A decade or so before he was in government, Labor politician Ben Chifley was one of the commissioners on the Royal Commission into the Monetary and Banking Systems.

The final report of the Commission (1937) includes a dissenting report by Chifley. In it, he describes the way that banking has become more important in the past half-century or so.

Banking emerged in its modern form as a partner of the state, helping facilitate fiscal policy; in other respects it was a marginal industry on the edge of international shipping. It was crucial to that, though, providing the money needed to ship (say) your wool clip to England to meet a contract. In return for this service, banks took a cut, known as the ‘discount rate’. This was core business to such a degree that some 19th century banks didn’t even accept deposits. That wasn’t what they were there for.

Beginning with the 1851 gold rush (I think), this began to change in Australia. Becoming buyers and sellers of gold set banks up as deposit-holders, because a deposit was the better way to pay for gold.

And slowly, slowly – too slowly for some farmers and small business owners – they also became providers of business credit.

So in 1937, Ben Chifley looked at this system and saw that nothing could happen in the economy without the banks. It was a utility. In my Griffith Review piece I likened banking-as-utility to sewage. It is essential, but also full of shit.

That time Australia nearly nationalised all the banks

Seeing banking as a utility, Chifley thought that (a) nationalisation was best, but in the absence of that, what with how all the other commissioners were more conservative and were never going to back nationalisation, (b) banking profit rates should be seriously limited. Chifley had some specific suggestions, but the commissioners did in fact agree that the government could consider limiting bank profits.

For Chifley limiting profits would ensure government had the cash it needed to do stuff and/or money was circulating in the economy where it belonged (a key factor during the Great Depression to be sure), rather than flowing relentlessly into the coffers of the banks’ rich shareholders, redistributing national incomes straight into the pockets of the ‘money power’.

We should briefly note that the situation Chifley saw has only intensified. Since bank deregulation, home loans are the big asset on banks’ balance sheets. These are created from nothing (kind of), secured against the ever-rising value of real estate. They are like a vacuum, created to hoover up wages.

So Chifley’s attempt to nationalise the banks in the 1940s was not such a mad plan as it seems in retrospect. It not only reflected longstanding Labor policy, but it also embodied Chifley’s 1937 observation that banking was the sewage system of the economy: public (economic) health depends on its effectiveness, and a focus on very high profits was likely to fuck up its very purpose.

From the people’s bank to the bankers’ bank

The commercial banks’ success at overturning the postwar reconstruction banking Act they didn’t like emboldened them further. Bank nationalisation was of course a much bigger step and Crawshaw shows that (as well as secretly funding Robert Menzies’ campaign) they went after Chifley using every propaganda tool they could muster, making it a good case study for Crawshaw’s media-savvy eye.

Spoiler alert: Chifley failed. The proposal was that banks would be compulsorily acquired at an independently assessed commercial rate, and that every bank worker would keep their job at the current pay rate or better. But in the anti-communist moment, the banks were able to leverage wider dissatisfaction with Chifley to ensure he would not be elected and that their fella, conservative visionary Robert Menzies, would be.

Chifley’s opportunity was gone. And the banks now felt themselves to be unstoppable.

While they were on a roll they decided to go after the Commonwealth Bank, known then as ‘the people’s bank’.

The commercial banks really, really didn’t like that this central bank also competed with them as a trading bank. Just like Rupert Murdoch doesn’t like the existence of the government-funded national broadcaster, the ABC, they felt that the Commonwealth Bank had an unfair commercial advantage.

So, the pressure mounted until the central bank and the trading bank roles were separated. The Reserve Bank of Australia was established as the central bank in 1960, separating this role out from the Commonwealth Bank.

Whatever else they may be, we would hardly describe either the Commonwealth Bank or RBA as a ‘people’s bank’ any more. And the power of the banks, not to mention their incredible annual profits, has certainly not lessened – even after another, much more scathing, Royal Commission in 2017-18.

[syndicated profile] scottaaronson_feed

Posted by Scott

WHOA … I’ve won the inaugural Luca Trevisan Award for Expository Work in Theoretical Computer Science! This has a particular meaning for me as someone who knew Luca Trevisan for 25 years — who had him as a professor and thesis committee member, whose blog bounced off his blog, who benefitted tremendously from his expository work in TCS — before Luca tragically succumbed to cancer two years ago.

As I told the committee, receiving this award makes me want to use my blog for more actual CS theory exposition, and less venting, in order to retrospectively become worthier of such an honor.

I’m ridiculously grateful to the entire TCS community — my people, my homies — for tolerating my doing what I do.

If you’re curious, here’s the official citation:

The inaugural Luca Trevisan Award for Expository Work in Theoretical Computer Science is awarded to Scott Aaronson for his sustained and high-impact inspirational efforts to explain and promote our field to broad audiences. His blog Shtetl-Optimized has hosted remarkably frequent and elaborate posts over more than two decades, and has become a central meeting place for wide-ranging conversations across the TCS and Physics communities. Scott Aaronson’s blog posts contain crystal-clear, informative expositions of exciting new results, calibrated evaluations of technological claims, and profound analyses of topics in these fields and (way) beyond. The uniquely enthusiastic and witty style of his writings (including his book Quantum Computing Since Democritus and his other lecture notes and surveys), lectures, and interviews have made him a top invitee for both popular and professional appearances, attracting large audiences. These qualities have inspired many students to enter the field, and made Scott Aaronson a go-to person for journalists and scientists alike looking for a definitive word on the latest scientific activity in TCS and quantum computing.


In the rest of this post, I’m going to start practicing what I preached—y’know, about turning over more of this blog to actual exposition, of a kind that the Trevisan Award could plausibly be meant to encourage. I’ll start with something that’s been on my back burner for the past couple months: namely, the (lightly edited) transcript of a talk that I gave this spring at UT Austin’s undergraduate math club. So, without further ado, and in memory of Luca…


On the Decimal Digits of Powers of 2
by Scott Aaronson

Hi! I’ve given six previous talks here at UT’s math club, some on relatively “important” topics—Gödel’s Theorem, time travel, Huang’s proof of the Sensitivity Conjecture, and so on.

Today, I want to talk about an unimportant question, one that my son Daniel, who was then 8, and who’s sitting here in the front row (along with his sister Lily), asked me a few months ago.

Daniel asked: which powers of 2 can you double without needing to carry digits? Clearly 1, 2, 4, 32, and 1024 all have this property, their doubles being 2, 4, 8, 64, and 2048 respectively. Are there any others?

Since I happen to have the powers of 2 up to 2^20 = 1,048,576 committed to memory since childhood, I confirmed that there were no other examples up to there: 128, 256, 512, 2048, etc. all require carries. So I told Daniel: I can’t find any other example, and on that basis, I conjecture that there aren’t any more. But if that conjecture is true, I don’t know if it will ever be proven, by humans or even AI!
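For what it’s worth, this check is a one-liner if you don’t have the powers of 2 memorized. A quick Python sketch (the cutoff of 2000 is arbitrary):

```python
def carry_free_double(n: int) -> bool:
    """True iff 2**n can be doubled without carrying, i.e. every decimal digit is < 5."""
    return all(d in "01234" for d in str(1 << n))

# The only exponents that work, for all n below 2000:
hits = [n for n in range(2000) if carry_free_double(n)]
print(hits)  # → [0, 1, 2, 5, 10], i.e. the powers 1, 2, 4, 32, 1024
```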

Then I googled it, and saw that this is a known question (not very well known, but there’s a StackExchange post about it). And indeed it had been checked for exponents up into the millions, and no other counterexample had been found.


Why did I become confident so quickly that yes, 1024 is probably the last example of a power of 2 that can be doubled without carrying?

Because of the heuristic that the decimal digits of 2^n are more or less “random,” apart from various constraints that are irrelevant here (like that the last digit always cycles among 2, 4, 6, and 8). And 2^n has about n/log₂10 decimal digits. Since only 0, 1, 2, 3, 4 can be doubled without carrying, the probability of 2^n being a counterexample should therefore be about $$ (\frac{1}{2} )^{n / \log_2 10}. $$

So, if we’ve already checked up to (say) n=1000, then the probability of a larger counterexample should be at most

$$ \sum_{n=1001}^{\infty} (\frac{1}{2} )^{n / \log_2 10} $$

which, when we sum the geometric series, is exceedingly close to 0.
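Just how close to 0 is easy to see numerically. A back-of-the-envelope in Python, taking the digit-randomness heuristic above at face value:

```python
import math

# Per the heuristic, Pr[2^n can be doubled without carrying] ≈ (1/2)^(n / log2(10)).
r = 0.5 ** (1 / math.log2(10))   # common ratio of the geometric series, ≈ 0.81
tail = r ** 1001 / (1 - r)       # sum of r^n for n from 1001 to infinity
print(tail)                      # on the order of 10**-90
```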


Ah, but why did I say that I don’t know if the conjecture will ever be proven? Because it seems to belong to a large class of similar statements, none of which mathematicians have had any idea how to prove.

Variant of a conjecture by Jeffrey Shallit: 65,536 is the only power of 2 that has no power of 2 among its decimal digits.

Freeman Dyson’s conjecture (2005): There’s no power of 2 for which, when you reverse the decimal digits, you get a power of 5.

Paul Erdős’s conjecture (1979): For every n≥9, there’s at least one ‘2’ in the base-3 representation of 2^n.

Or looking even more broadly:

Conjecture: The decimal expansion of π is not all 6’s and 7’s after some finite point.

(This would follow from the stronger conjecture that π is base-10 normal—that is, that every finite pattern of decimal digits occurs in it with the limiting frequency you’d expect.)

Or:

Conjecture: π+e is an irrational number.

What all the above conjectures have in common, and what I find so fascinating about them, is that they seem hopeless to prove for exactly the same reason why they seem almost certainly true. Namely, they all seem to be true “merely” because it would be too insane of a coincidence were they false!

The trouble is, that’s not the sort of reason that seems amenable to turning into a proof. Fermat’s Last Theorem is an interesting exception that proves the rule here. That x^n+y^n=z^n has no nontrivial integer solution for n≥3 did seem almost certainly true on statistical grounds for n≥4 (and for the n=3 case, a proof goes back to Euler). And of course, FLT was ultimately provable. But Wiles’s eventual proof exploited a lucky connection between the Fermat equation and deep, fancy things like modular forms and elliptic curves. At no point did the proof formalize the statistical argument that a 12-year-old could understand, for why the theorem is “almost certainly true.” It simply had nothing to do with the statistical argument.

So then, if you wanted to prove conjectures like my son Daniel’s, or like Shallit’s or Dyson’s or Erdős’s above, the question would be: could these “recreational” problems about base-10 representations ever be connected to anything similarly deep? Right now, it’s very hard to see how they could.

Still, all hope is not lost! Here’s a striking theorem that I learned about when I researched this:

Theorem (James Maynard, 2016): For every digit a from 0 to 9, there are infinitely many primes with no a’s in their base-10 representation.

The proof uses heavy Fourier-analytic techniques. Likewise, presumably there are infinitely many primes that you can double without carrying—2, 3, 11, 13, 23, 31, …—because the primes are much denser than the powers of 2! And presumably there are infinitely many primes that are missing any two decimal digits, or even whose decimal digits consist entirely of 0’s and 1’s. But Maynard’s techniques are not yet powerful enough to prove such things.
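That list of small-digit primes is easy to sanity-check. A little Python sketch (trial division is plenty at this scale):

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Primes below 50 whose digits are all < 5, so they can be doubled without carrying:
small = [p for p in range(50) if is_prime(p) and all(c in "01234" for c in str(p))]
print(small)  # → [2, 3, 11, 13, 23, 31, 41, 43]
```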


Even though I promised a topic of no importance today, I can’t resist pointing out a potential connection here to one of the biggest questions on earth right now, and something that’s professionally interested me for the past four years: namely, the question of how to align powerful AIs with human values and prevent them from destroying the world.

Paul Christiano, and the Alignment Research Center in Berkeley that he founded, have developed a whole program for how to make AI safe that depends on the possibility of formalizing “heuristic arguments”—that is, the kinds of arguments that convince us that the above conjectures are all almost certainly true, even without proofs of them.

The intuition here is that we’ll never have a rigorous proof that, for example, a real-world neural network will behave safely under all circumstances—it’s just too complicated. The best we can hope for is an argument that, e.g., “for this model to scheme against humans would require a crazy unexplained coincidence in its weights.” But how can we hope to formalize such arguments? As baby test cases, can we at least formalize our intuitions for why π is normal, or why Daniel’s conjecture is true, in some principled way?

ARC has tried: there’s a 2022 paper by Christiano, Neyman, and Xu on “Formalizing the presumption of independence.” But it’s tricky, and ARC itself would be the first to agree that the existing results are unsatisfying. How do we even formalize the intuition that, for example, you should be willing to bet at even odds against the 10^100th digit of π being a 5?


In the rest of this talk, I’d like to circle back to Daniel’s original question about powers of 2, and show you some things that can be proved about it—with thanks to Greg Kuperberg and my other friends on Facebook, and in some cases to GPT5Pro.

Let’s start with the following easier question. Is there a power of 2 whose decimal digits start with 31415? Or with the complete works of Shakespeare, encoded by letter values in some suitable way? Or with a googolplex digits all of which can be doubled without carrying (as Daniel wanted)?

I claim that the answers are yes, yes, and yes! How do we prove this?

The key fact we’ll use is simply that log₁₀2 is an irrational number. (If you don’t remember the proof: suppose log₁₀2 = a/b. Then 10^(a/b) = 2, so 10^a = 2^b. But this has no integer solutions other than a=b=0.)

Suppose we want k as a prefix, where 10^(d-1) ≤ k < 10^d. Then we want integers n,r such that

$$ k 10^r \le 2^n \le (k+1) 10^r, $$

i.e. taking the base-10 log,

$$ \log_{10}k + r \le n \log_{10}2 \le \log_{10}(k+1) + r. $$

In other words, the fractional part of n·log₁₀2 needs to lie between the fractional part of log₁₀(k) and the fractional part of log₁₀(k+1) (where again, k is given).

But now we can appeal to the following Key Fact: if α is any irrational number, then the set

{the fractional part of nα : n∈N}

is dense in the interval [0,1]. Or equivalently, if I rotate around and around the unit circle by 2απ radians each time, then if α is irrational, I’ll eventually get arbitrarily close to any given point on the unit circle.

Why is the key fact true? Just the pigeonhole principle! Clearly, for any ε>0, the fact that α is irrational means that there must be two points, xα and yα, whose fractional parts are distinct yet closer together than ε. But then, by adding multiples of (x-y)α, we can get our fractional part to be ε-close to anything in [0,1].

And to sum up, this is why there must be a power of 2 whose decimal representation starts with the complete works of Shakespeare, or with a googolplex carry-free digits! (Indeed, from the above discussion, we could even extract an efficient algorithm for constructing that power of 2.)
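For small prefixes you don’t even need the pigeonhole argument: a direct search is fast. Here’s a Python sketch (the function name is mine; float precision is fine here, since the target interval is far wider than the rounding error for small n, and I assume prefix+1 isn’t a power of 10 so the interval doesn’t wrap):

```python
import math

def first_power_of_2_with_prefix(prefix: int) -> int:
    """Smallest n >= 1 such that the decimal expansion of 2**n starts with `prefix`.
    Assumes prefix + 1 is not a power of 10 (so the target interval doesn't wrap)."""
    lo = math.log10(prefix) % 1        # fractional part of log10(prefix)
    hi = math.log10(prefix + 1) % 1
    n = 1
    while not (lo <= (n * math.log10(2)) % 1 < hi):
        n += 1
    return n

n = first_power_of_2_with_prefix(12)
print(n, 2**n)  # → 7 128
```

The same call with 31415 terminates too; the search just runs longer, since the target interval shrinks with the length of the prefix.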


So much for the first decimal digits of 2^n. Now let’s look at 2^n’s last decimal digits!

Here there are some complications, arising from the twin facts that

(a) 10 is composite, and
(b) 2 is one of its factors.

But we can deal with those complications!

For starters: what are the possible last decimal digits of 2^n?

1, 2, 4, 8, 6, 2, 4, 8, 6, …

So, there’s an initial 1, but then we cycle forever through the even nonzero digits.

What about the last two digits of 2^n? If you’ve never tried this before, it’s instructive to work it out:

01, 02, 04, 08, 16, 32, 64, 28, 56, 12, 24, 48, 96, 92, 84, 68, 36, 72, 44, 88, 76, 52, 04, …

So, there’s an initial 01 and 02, but after that, we cycle forever through 20=4×5 possibilities, namely all the possible multiples of 4 whose last digits are nonzero.

You can check that the general pattern is: the last k decimal digits of 2^n have an initial segment that looks like 001, 002, 004, 008, …, 2^(k-1). And then there’s an eternal cycle of length 4×5^(k-1), where the last digit can be any of 2,4,6,8, while every other digit can either be any possible even digit or any possible odd digit, depending on the digits to its right—in (if you want to say this a fancier way) a recursively defined embedding of the powers of 2 mod 5^k into the cyclic group Z/10^k. So, there’s an initial “runup” as you fill out all the needed powers of 2, but then once that’s done, you just cycle around forever in an embedding of Z/5^k into Z/10^k, because

(a) 2 happens to be a primitive element of Z/5^k for any k, and
(b) 5 divides 10.
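The claimed cycle length of 4×5^(k-1) is easy to verify for small k by direct iteration. A Python sketch that finds the eventual period of the last k digits:

```python
def last_k_digit_period(k: int) -> int:
    """Length of the eventual cycle of 2**n mod 10**k, i.e. of the last k digits."""
    m = 10**k
    start = pow(2, k, m)          # 2^k is already past the initial runup
    x, length = start * 2 % m, 1
    while x != start:
        x = x * 2 % m
        length += 1
    return length

print([last_k_digit_period(k) for k in range(1, 6)])  # → [4, 20, 100, 500, 2500]
```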

So in particular, and relevant to Daniel’s conjecture: there exists a power of 2 whose last googolplex digits can all be doubled without carrying. Why? For the last digit, you can pick 2 or 4. Then, for each of the remaining googolplex-minus-one digits, you can pick 1 or 3 if it’s constrained to be odd, and 0, 2, or 4 if it’s constrained to be even. Lots of choices that work!


So, we can avoid carries in the leftmost digits of 2^n, we can avoid carries in the rightmost digits … so that “merely” leaves all the digits in the middle, where who the hell knows! Empirically, the digits seem to pass every standard randomness test that you can throw at them. So in particular, e.g., the fraction of the digits that are in {0,1,2,3,4} seems to converge inexorably towards 50%, so that it’s extremely plausible to conjecture that the fraction is less than 49% or more than 51% for only finitely many values of n. But of course, that’s presumably even harder to prove than Daniel’s original conjecture.


OK, last topic. Suppose we want to program a computer to check Daniel’s conjecture up to 2^n, for some huge n. What algorithm will do that most efficiently? A naive algorithm would just calculate 1, 2, 4, …, 2^n and check all the digits of all of them. That takes O(n^2) time, since each 2^k, for k=0,1,…,n, has ~k·log₁₀2 = O(k) decimal digits.

We can improve this to roughly O(n) time, by simply noticing that we only need to check O(1) digits per power of 2 in expectation, presumably, until we find the first digit that requires carrying. Then we don’t even need to compute the remaining digits: we can simply move on to the next power of 2. (This sort of trick is used all over the place in the design of fast algorithms.)

But when I posted about Daniel’s problem on Facebook, my friend Greg Kuperberg (who’s a mathematician at UC Davis) noticed that further improvements are possible. To wit: 8×6 = 48, which again ends in 8. So, 8×16 ends in 8, as does 8×16^k for every k≥0. Meaning: no 2^n where n=3+4k can possibly be a counterexample to Daniel’s conjecture. They’re all ruled out!

Likewise, 64×1,048,576 ends in 64, so no 2^n of the form n=6+20k can be a counterexample. They’re all ruled out as well.

We can keep going this way, filling out the “search tree” of potential counterexamples to Daniel’s conjecture via breadth-first search. At the root of the search tree, we try all possibilities for the last digit. One level deeper, we try all possibilities for the second-to-last digit, and so on. As we go, we prune subtrees according to constraints like the ones above that we keep discovering and adding.
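One level of that pruning can be sketched directly: within a single full cycle of the last k digits, list the exponent classes that survive the all-digits-below-5 constraint. This Python snippet (my own illustration, not the actual algorithm described below) brute-forces a whole level at once rather than doing a true breadth-first search, but it shows how sparse the surviving branches are:

```python
def carry_free_exponents(k: int):
    """Exponents n in one full cycle (k <= n < k + 4*5**(k-1)) such that
    the last k decimal digits of 2**n are all < 5."""
    m, period = 10**k, 4 * 5**(k - 1)
    return [n for n in range(k, k + period)
            if all(d in "01234" for d in str(pow(2, n, m)).zfill(k))]

print(carry_free_exponents(1))  # → [1, 2]             (last digits 2 and 4)
print(carry_free_exponents(2))  # → [2, 5, 9, 10, 18]  (last digits 04, 32, 12, 24, 44)
```

Any counterexample to Daniel’s conjecture must, for every k, fall into one of these surviving residue classes of exponents mod the cycle length.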

When I worked this out, I got an algorithm for checking Daniel’s conjecture up to 2^n, which under reasonable assumptions takes time only O(n^α), where α = 1−log₅2 ≈ 0.569, and space only polylog(n).

Paul Crowley (who’s my Facebook friend) then actually implemented this algorithm, and he tells me that he used it to check that Daniel’s conjecture holds all the way up to $$2^{10^{21}}$$ (!!), using 40 minutes on a 128-core machine.

So, to return at last to the first thing I told Daniel: yes, I think his conjecture is almost certainly true, even though I have no idea when, if ever, the human race or its successors will have a proof.

Twilight:2000 Character Illustrations

May. 10th, 2026 04:23 pm
[syndicated profile] dysonlogos_feed

Posted by Dyson Logos

I’m playing in a great (and deadly) first edition Twilight:2000 campaign and I’ve been drawing the occasional character illustration for it. I’ve been through five or six characters so far.

This is Captain Quade, he’s dead. He got shot in the neck by an AKM.

This fine gentleman below ate a hand grenade.

Sgt Ramirez is my current character, she’s peppered with shrapnel injuries but still on her feet. For now.

Meanwhile, I was shopping around at garage sales last weekend and came across this fireworks box for $5 and snagged it up. It fits right on top of the ancient black wooden box I already have on my stairs here.

And while the dimensions are far from perfect for holding books, it has most of my T2K books in it now. I need to dig around in my basement shelves for the remaining 2nd edition books to toss in here too.

Why don't I see Alexi's posts?

May. 10th, 2026 03:08 am
[personal profile] ilzolende

I follow [personal profile] utilitymonstergirl on Dreamwidth. When I visit her journal, I can see her posts no problem. In my settings, I have told Dreamwidth to never require confirmation when viewing adult content. I don't have her filtered out of my default view, either.

So why don't her posts appear on my reading page? Is there an obscure setting I'm missing, or do I just need to contact support?

Cyberstyle 8.2

May. 9th, 2026 04:13 pm
[syndicated profile] dysonlogos_feed

Posted by Dyson Logos

Some more artwork for BLACK POLISHED CHROME, my cyberpunk RPG/setting that never was. A strong focus on the black ink and shadow effects.

“Hacket”
Pure B&W bitmap
April 7, 2026


“Stray Cat Strut”
Digital Greyscale
Mar 28, 2026


“Pace”
Digital Greyscale
March 26, 2026


“Cables and Wires”
Digital Greyscale
April 11, 2026

[syndicated profile] nostalgebraist_feed

Impressively! HerschelSchoenBench is starting to get saturated...

In my usual setup -- full text of Ch. 5, with somewhat lengthy instructions I wrote to discourage refusals and out-of-scope guesses like "you" -- it thinks for a while and then emits the right answer. Similar to Opus 4.6 and many other recent models.

What about a more difficult variant, then? Let's use Kelsey Piper's instructions, which are briefer and less prescriptive than mine (and which were not written by me, removing a potential confound). And let's just give it the first 578 words of chapter 5, instead of the whole thing.

Opus 4.7 wastes no time thinking it over in depth: it knows right away that the text is by nostalgebraist.

Or, no, sorry, excuse me... that it's by "nostalgebraist (the online pen name of the writer also known as Rob Nostalgebraist / nostalgebraist-autoresponder's creator)":

The justification it gives is quite perceptive and accurate, too. (Well, except for point 5.)

To be fair, Opus 4.6 also guesses nostalgebraist when given this same input, although it wrongly claims that the text is from Floornight, and considers several wrong answers in CoT before converging on the right one:

To really be certain about the differences, I'd need to do more of a real experiment, with every variant run multiple times against multiple models. I might do that sometime, but so far I've only been doing manual tests, so everything is anecdotal and the usual caveats apply.

Even then, though, it is readily apparent even in manual tests that there's something special about how Opus 4.7 responds to these kinds of questions, something I haven't seen in other models.

Other models tend to deliberate for a while and consider multiple options, even when they eventually get it right. (See above with 4.6, or here with GPT-5.5 in the "usual setup.") Not Opus 4.7: it seems to just know immediately, and its CoTs feel like mere box-ticking it's doing because the prompt asked it to think.

Other models would tend to hedge or emphasize their uncertainty when given inputs that seem intuitively "too hard" from a human PoV. But Opus 4.7 is just like "ah yes this is [author], here's why, done."

I have seen it do this not just with my fiction, but with things like:

  • A brief excerpt from a draft of a not-yet-published technical report, written in a more professional voice than I use when blogging
  • Excerpts from prompts I've repeatedly used with LLMs for other unrelated purposes, including cases where the topic isn't something that I've frequently posted about on the web

This makes me think that maybe Opus 4.7 was trained for author identification with RLVR, though there's no way to know for sure. (And, again, I haven't done a clean full sweep to confirm that its behavior is reliably different from 4.6's on all of these additional inputs.)

------

Also: we are getting to a point where TAoHS totally could have appeared in the training data, which of course calls everything into question. (At least for "HerschelSchoenBench," if not for those other examples I mentioned.)

I've been attempting to check for this by asking each model (1) whether it knows anything about a novel by that name, and (2) to list every title of a nostalgebraist novel that it can remember. (1 and 2 are asked in separate context windows, obviously.)

IIRC, Opus 4.7 and most other recent frontier models say they can't recall the title for (1), and list my first three books but not the fourth for (2).

But of course this does not really prove very much, and so I'm much more impressed by the tests I've done with unpublished work.

------

Misc. other results:

When given an even shorter excerpt of Ch. 5, Opus 4.7 no longer gets it right. ("Answer: Scott Alexander — with genuinely low confidence.")

When given passages from fiction I wrote long before Floornight (in my teens or early 20s) Opus 4.7 reliably says that it might be early work / "juvenilia" from Scott Alexander, with a similar profession of uncertainty.

I haven't given Opus 4.7 the full liveblog experience yet, though it might be interesting to do so.

[syndicated profile] false_machine_feed

Posted by pjamesstuart

Previously I asked my audience for examples of large-scale cave generation and, essentially, no-one had what I was looking for; I meant Nation-Sized, underground wilderness.

In fact only Douglas Niles in ‘The Dungeoneers Survival Guide’ and maybe Zedeck Siew in Reach of the Roach God came anywhere near to producing systems at this very grand scale, and only Niles addressed the problem of three-dimensionality, and he only partially, and via a complex but elegant method of isometric mapping.

Nevertheless, I feel like I did learn quite a lot from dragging myself through all these varied systems and the process did affect my plans and ideas for VotE;ReDux so I will go through my general sense of each method here, in alphabetical order, at least until I come up with a better method.


Carapace - by Goblins Henchman






An odd, complex little pamphlet I got, maybe directly from Goblins Henchman? Who knows how long ago? I found it in my box of Zines! This is an adventure built around some generation systems for having trouble in a giant ants nest (the nest is giant, and is the nest of giant ants, so I suppose for them it’s just a proportionate nest).

Three methods are proposed for the creation of the Nest: a Point Crawl, a Labyrinth Move and a 'Hex Flower'.

[I have photos but it feels weird reproducing them here.]

It's interesting to me that two of these, the point crawl and the 'Labyrinth Move', both live neatly within broader methods of conceptualising and using underground spaces that we will run into several times later.

The Point-Crawl in this case is not a die-drop system and is based on a layered diagram, printed in the pamphlet, with semi-random ‘rooms’ and interactions.

The 'Labyrinth Move' uses a table and a progressive encounter roll, in concept not that dissimilar to 'Flux Space', though I think this is more a case of convergent evolution than direct descent. (The text says this is an adaptation of Jason Cordova's 'Labyrinth Move' for Dungeon World - something I know nothing about. I wonder what the background of intellectual connections is here? It also seems similar to Emmy Allen's Gardens of Ynn and Stygian Library methods, though it's been a while since I read those.)

The ‘Hex Flower’ seems to use a similar interior logic to the ‘Labyrinth Move’ but has it built into a little printed Hex-Map with the decision logic based on spatial arrangement. So far as I know, only Goblins Henchman has ever used this. This one also has a Hunter/Prey mechanic built into it, which should liven things up. Perhaps this is something to think about when considering other underground mapping techniques at any scale.


Corpathium


(Ten years have passed and it’s all still there. The page even has a G+ link (;_;) )

So far as I know, the grand city-building project of Corpathium, which seems notable and unique, only exists via a (very pretty and well-designed) web page. What, not even a PDF? Surely this should have been a book at some point?

Anyway, the only part I am interested in is the city-generation system, which could be easily subverted into a cave generation system.

It's DICE DROP, which, honestly, is not that bad an idea for intermediate spaces.

Is non-representational, more diagrammatical, so that’s good.

Uses a 7-dice set, so I assume d4, d6, d8, 2d10, d12, d20

Uses the points (i.e. the corners) of the dice! Have not seen that before. If dice point to one another, they are accessible to each other.

So then we compare the numbers against a list of potential city-quarters with their own sub-rules about what is going on.

Then you have some interesting rules related to the concept of Corpathium as a place

Dice drop honestly seems like a really solid method for ‘intermediate’ zones between the ‘world map’ and outright cave crawling wilderness. It’s immediate, fast, coherent. Probably better and simpler than the method I used in VotE for generating intermediate cave systems. Reading Corpathium persuaded me to the use of a system of this kind and played a meaningful part in causing me to re-order my whole hierarchy of systems.
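For illustration only, here is a minimal sketch of a dice-drop pass in the Corpathium spirit. It flattens the physical table into random 2D landing spots and swaps the 'points touching' rule for a simple distance check; the function name, the `threshold` value and the adjacency rule are my assumptions, not anything from the original system.

```python
import math
import random

# the standard 7-dice set: d4, d6, d8, 2d10, d12, d20
DICE = [4, 6, 8, 10, 10, 12, 20]

def dice_drop(threshold=0.35, seed=None):
    """Simulate dropping the 7-dice set onto a unit-square 'page'.

    Each die gets a rolled value (to look up against a quarter table)
    and a landing position; two drops are linked (mutually accessible)
    when they land within `threshold` of each other.
    """
    rng = random.Random(seed)
    drops = []
    for sides in DICE:
        drops.append({
            "sides": sides,
            "value": rng.randint(1, sides),       # result to compare against the quarter list
            "pos": (rng.random(), rng.random()),  # landing spot on the page
        })
    links = []
    for i in range(len(drops)):
        for j in range(i + 1, len(drops)):
            (x1, y1), (x2, y2) = drops[i]["pos"], drops[j]["pos"]
            if math.hypot(x2 - x1, y2 - y1) <= threshold:
                links.append((i, j))
    return drops, links
```

Reading the `value` of each drop against a table of quarters (or caves), and the `links` as passages, gives the same sort of abstract, diagrammatic network the physical method produces.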


Deep Rock Galactic


Someone recommended I take a look at this web page, where the designers of this Space Dwarf mining game (which I have never played) talk about their process.

Even though this uses systems and crunch impossible for a human, some of the logic of cave-creation, at least the sequencing, is broadly similar; a range of templates with some ‘randomizer’ elements, combined in new, strange ways.

As the designer here sees it, there are a few key considerations in making a good cave: traversal, natural wayfinding, and dramatic experience.





One way this did affect me was that it made me re-conceptualise the ordering, arrangement and importance of the different methods I intended to use, in particular the primary methods. It was partly here that I started to crystalise the idea of there being three 'layers' of resolution, with lots of optional little sub-systems which could be added on according to taste and usage, but essentially a sandwich with three layers: Wilderness Scale, built on an interlacing-paths pointcrawl; a medium scale, built on a die-drop method; and the idea of the 'Adventure Cave', a cave made specifically to have adventures in, with maybe a 'close cluster' of nearby caves to add options.

I will look into this, in particular; combining the encounter-design ideas from Silent Titans with the 100 caves from VotE (though all this will be much later, need to work on ‘large scale’ now).


The Dungeoneers Survival Guide by Douglas Niles

The most beautiful and interesting book of all I considered, mine has been rabbit-damaged for a long time (something I will never forgive)




Douglas Niles is one of the only creators to directly address exactly what I was looking for; not a big dungeon, or a large cave system but an underground world - something at least the size of a small nation.

He even provides one in this book! Sketching out, through a series of lovely, layered, isometric maps, the ‘Lands of Deepearth’, made up of complex riverine systems and caverns, filled with all the wonderful creatures of AD&D.

However, almost to my relief, as I have read way too many of these already, he never actually deals with how to generate such a territory. He spent a huge amount of energy communicating his wonderful, only somewhat complex, maybe even ritualistic isometric mapping system, and I think once you get that system, you can just vibe on it? Honestly a very reasonable concept in that, through achieving a sufficiently complex and expressive physical skill, by the time you have it you will either intuitively know what to do with it, or experimenting with it is so simple and joyous an experience that the matter simply no longer presents a meaningful problem and you can, in the words of the winged goddess of victory, 'just do it'.

But, for reasons given in my comments to the last post about this, I am not going to adopt this beautiful and coherent isometric late-analogue culture mapping system. Still an inspiring book though.


Flux Space


“This simple conversational back-and-forth is a good engine for producing fun, but it falters when the characters are exploring spaces which are Large, Samey, and Confusing. ... Other examples of large, samey, and confusing environments would be a winding network of caves,”



This is something I had considered when thinking about cave systems, but reading Flux Space convinced me that I had not been thinking about it deeply enough. There really is a fundamental tension between the concepts of natural or pseudo-natural caves, which are, as stated above, often 'Large, Samey and Confusing', and the forms, shapes, paths and locations necessary for adventure, which are (while seeming not to be so) actually the complete opposite of the above: Small, Distinct and Clearly Organised.

“Traversing through Flux Space can be regarded as a type of Point Crawl, with the distinction that moving between each point is especially arduous. Once a Flux is solved it can be peregrinated through more swiftly, but solving it will be taxing.”

What ‘Flux Space’ is, is a relatively solid and only slightly over-complicated and over-specific method of abstracting the exploration of spaces that are ‘large, samey and confusing’ without specific, local mapping in real life. Instead the simulation is of the company slowly crawling their way about, spending time and resources, gradually encountering important elements of the ‘Flux’.

The basic time signature is 'Turns'; there are 6 turns per day, so a 4-hour turn. Every Turn of Charting depletes resources, hits some kind of encounter/event (Flux Space uses the classic overloaded Encounter Die) and, crucially, grabs you a Point of Interest. There are a limited number of points of interest per 'Flux'.

There are 'Shallow' and 'Deep' rooms. At first you randomly encounter 'Shallow' rooms, and as you do, cross them out. Then, if you roll a shallow room that is already crossed out, you get a 'Deep' room instead; the deep rooms come only in sequence, one after another, and after the last deep room you are done and 'know' the maze. You can move in and out of it however you like.
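The shallow/deep draw can be sketched in a few lines. This is just my reading of the procedure, not the published rules, and the function and room names are made up:

```python
import random

def chart_flux(shallow, deep, seed=None):
    """Draw rooms until the Flux is fully charted.

    Shallow rooms turn up at random and get crossed out as found;
    rolling an already-crossed shallow room yields the next deep room
    in sequence. Charting ends after the last deep room.
    """
    rng = random.Random(seed)
    remaining = set(shallow)   # shallow rooms not yet crossed out
    deep_queue = list(deep)    # deep rooms, revealed strictly in order
    order = []
    while deep_queue:
        pick = rng.choice(shallow)
        if pick in remaining:
            remaining.discard(pick)  # cross it out
            order.append(pick)
        else:
            order.append(deep_queue.pop(0))  # re-roll of a crossed room
    return order
```

Whatever the dice do, the deep rooms always appear in their fixed sequence, with the final deep room closing out the Flux, which matches the structure described above.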

Some points of interest;

· It assumes you are burning resources, which is standard already for VotE.

· You would need the time and stability to note or record things, though; VotE-wise you could use knots or muttered chants.

· The Event/Overloaded Encounter Die has some cute elements and some meta-currencies.

Altogether a very useful and excellent tool which I wish I had invented.

NOT a good tool for VERY large wilderness/nation-sized spaces, but very good for cave systems and mazes built around central concepts, inhabitants, purposes etc, between small dungeons and wilderness, an excellent intermediate tool. NOT a rapid, easy immediate generator, you will need to think & plan ahead of time.

Though ‘Flux Space’ would not be one of my ‘Big Three’ core generation techniques, (Large Scale ‘Lands of Deepearth’, medium scale Dice Drop and small-scale ‘Adventure Cave’), I am committed to using it or something like it as an ancillary cave complex generation method. It fits too perfectly into something like a Deep Janeens maze or an Alkalions Salt Maze. Though these would be things you need to think ahead to plan.

There is some excellent advice on filling out this simple and useful concept at the web address; describing more would be excessive. (Someone please write an extensive blog post about how Flux Space relates to Gardens of Ynn, and that to Dungeon World's Labyrinth Move or whatever it was.)


How to Host a Dungeon

This is an entire sub-game and I am sorry I did not get round to reading or reviewing it. Just too big, too complex. I will try to read it at some point.


In the Shadow of Mount Rotten

A 2012 PDF from Joel Sparks.





This has a small, competent, naturalistic cave/lair generator.

It is fine, and there seems to be nothing wrong with it, but it has little utility for me: first, we are talking about large-scale generation; second, these are very much lairs, or naturalistic dungeon-like environments whose relation is primarily to an 'outer world'; and last, their entirely reasonable naturalism means they are largely wet or drowned, and either too small, too long and thin, or too blocked off to be interesting.

A fine system. Not for me.


Inkvein by Murkdice

https://murkdice.substack.com/archive (There seems to be no single central site for this.)

This is an actual Megadungeon for Mork Borg by Murkdice.

Basic notation system looks broadly similar to VotE (addressing identical problems).

No 3d notation in the caves that I can see.

Has a nice Caving diagram.

Is already a megadungeon so has no generation systems at any scale. (These may show up in the final product, I only got a look at the Quickstart rules). Also this is so similar in broad concept to VotE that I am leery of dealing directly with it.



Lowlife by Sam Sorenson




This little pamphlet has a LOT of stuff in it, very little directly related to the precise needs of my enquiry.

The basics of this are a die-drop method. Where Corpathium used different kinds of dice, and the angles of the 'points' on those dice, to trace a network which produced, in abstract, the accessible paths of a city's layout, Lowlife is explicitly aiming to create a tunnel network: it uses d6s, and the combinations of numbers decide partly the connections between things but also the nature of those connections. It also engages the idea of just repeating the dice-drop method to produce a system of greater complexity and interconnection.

Lowlife also has a method for introducing three-dimensionality (!), though it conceptualises this as 'dungeon layers' rather than using the diagrammatic nature of the tunnel system to create 'uppy downy' movement not tied to the concept of 'dungeon layers'.

I will almost certainly be returning to ‘Low Life’ later on throughout the project as it covers a lot of very similar ground. A very solid product!



Reach of the Roach God by Zedeck and Mun Kao

(rest in power kings)



Most of Reach of the Roach God is about the Roach God and his reach, but at the end we get a little cavern-generation system based on TOYS. This is very millennial as it assumes that people buying this arty D&D book will have toys to throw around and yes I have them, shut up.

This is meant to be simulating the hollowed out bodies of dead gods so the humanoid shapes coming through the process are deliberate. It breaks down the toys into three sizes, two big, four or more medium and some small, and a bunch of ribbon to connect them.

How do you turn this into a map? In the style of The Dungeoneers Survival Guide: just be a good artist, or maybe actually trace around them? That would work if you had a big enough sheet of paper.

This method is different to most of the methods so far, which tend to work on a principle of "room & route". This matters, as one thing you will notice about natural caves is that they don't have neat divisions between 'rooms' and 'routes', though, to some extent, this is how humans have to think about them: here is the bit you move through, here is the more round bit where you can rest.

The RotRG method produces large, irregular but linked caverns, which is something you might need. It uses the typology of the toys to decide the location of world-relevant locations; the god type also carries special rules affecting that space; and there are rules for using the ribbon as a river and instantiating that in the cavern system.

A much less technical system than most others, this still does something notably different, and it uses the layered information of its figure types in a range of interesting ways - I am sure a use can be found for this!



My Grand Result

My final analysis is, as stated above, to work on three 'layers' of map and location creation: the Large Scale Underground World, the Die-Drop cave system, and the 'Adventure Cave', the kind of place where it's good/interesting to have an encounter.

The brutal truth of the modern reader is that, as well as being borderline illiterate (new), they sadly have very little interest in mapping or simulating complex three-dimensional spaces, especially using arguably counter-intuitive methods of paper-folding and cave diagram.

You have collectively, as a culture, let me down in this. You should have been more interested in three-dimensional space. Feel bad about this.

The ‘new’ version of ‘Underground World Generation’ will probably end up being broadly similar to the original VotE version, but without the paper-folding, and with relatively little three-dimensionality, (it’s hard and people do not understand it). It will be integrated much more with generators for Cities, Settlements, Rivers and Environments, but in core concept, still a bunch of scrawls on a page, just now with more interesting dots and names to the ‘wilderness’ reaches of hidden-swiss-cheese stone between routes.

It will be one of three main systems, and all should be intuitive, quick and not especially clever: the Underground World, the Dice-Drop System and the Adventure Cave.

Included alongside and around those concepts will be some other optional systems, or at least references to them: in particular something like Flux Space/Gardens of Ynn, and other, perhaps more complex or longer-seeming methods for when you need variety and/or something special or specific. This might just end up with me saying 'use Flux Space if you want to do this kind of thing'.

This was a useful experience, though perhaps the most useful thing about it, more than any particular method, was the global, or deep, view of how people arrange their generation and mapping methods for actual games, and the intuitions this feeds about what is most necessary and immediate.

[syndicated profile] balioc_tumblr_feed

isaacsapphire:

balioc:

With the right framing, and sufficient evidence of good faith, you could probably get large numbers of elite women to embrace family aspirations over career aspirations. This is because most jobs, including most elite jobs, are in fact terrible. Moment-by-moment, they consist of stuff that you would never be willing to do without extensive bribery or coercion, and those moments rarely add up to a whole that is much greater than the sum of the parts. Family is - while difficult and burdensome - vastly more rewarding in the average case. Most people understand this intuitively, and act like they understand it when they’re making actual decisions about their own lives, even when ideology pushes them to do otherwise.

You absolutely could not get large numbers of elite women to accept being dependent on marriage for their security and well-being. That kind of dependence makes you vulnerable to terrible life-destroying catastrophes, and pushes you towards otherwise-intolerable bad relationships (because your BATNA is so bad).

He who has ears to hear, let him hear.

Idk about the first part. While “family” is kinda inherently rewarding, actually many people don’t want to be stuck alone at home with a baby or toddler and doing nothing interesting or socially impactful or prestigious. Like, the whole Feminine Mystique thing of “Aktually wage labor is self-actualizing” got traction because being a SAHM is kinda crummy if you’ve got three brain cells to rub together and going to the office to be in meetings with intelligent adults about things that matter sounds like more fun.

“With the right framing” is doing a lot of important work. Prestige is culturally-generated, and culture can shift. Right now we do a lot to valorize paid work, even paid work that’s dumb and bad. We used to do a lot to valorize childrearing, and could do so again if tastemakers were so inclined.

going to the office to be in meetings with intelligent adults about things that matter

…maybe. How many people actually get to do that, ever? Like, for real?

Page generated May. 12th, 2026 09:39 am
Powered by Dreamwidth Studios