• 1 Post
  • 94 Comments
Joined 3 years ago
Cake day: June 6th, 2023

  • Consider me enlightened! Put back in its original context, the statement makes a lot more sense. As with many things, being swept into the tech sphere robbed it of meaning. I also think the sentiment that “the tools used to build something are not the same tools which can effectively dismantle it” is true in many senses, not just in the context of social/political/institutional change.

    I was confused by the section “How Taking the ‘Master’s Tools’ Seriously Can Serve Enshittification”. It transitions from an argument that

    The early internet was structured around the assumptions of its architects: predominantly white, male, Western, educated, and abled

    (which is true) to linking this group directly to Facebook. While these descriptors apply to both the founders of the internet and the founders of the tech giants, Facebook is at least 15 years younger than the foundation of the public internet, and these two groups are both mutually exclusive and ideologically at odds. The author then goes on to use the social harms of big tech to push back against Doctorow’s first stage of enshittification, when the companies are “good”.

    I think this is a fundamental misreading of Doctorow. He has spent his career as a free software advocate, and claiming that the first stage of corporate capture of the internet is the ideal would be anathema to his more general arguments. What he means by “good” here – and he says this frequently in public discussions of enshittification – is that the product does what it says on the box, with no BS. People are tempted to use it because it lets them reach the internet without running into the sharp edges of the technology itself, and at first that is a reasonable compromise for many people, because it brings more of them online.

    The article argues that the internet “getting worse” is an incomplete view if we want to fully represent the experience of all stakeholders, and that to understand the impacts outside the white, male, etc. perspective we should use the tools of decolonialism. That would be true if Doctorow’s project were a thorough sociological analysis of the impacts of technology. But it isn’t – it’s a rallying cry. The goal of his book is to make a coherent narrative of how the consumer experience of technology has changed over the era of Big Tech, and it does that. This is far from the only case where it leaves out strong tie-ins to other philosophical or sociological concepts, but there is a strength in a focused argument as well.

    It’s unsurprising that Doctorow misappropriated Audre Lorde’s words in their meme form, because that’s what the book is – an abbreviated, digestible approach to the topic. However, I’m glad that someone made those connections.










  • Well, first of all China does make lithography equipment (for instance, Shanghai Micro Electronics Equipment, who are currently at 28 nm). There are a couple of others iirc, and they typically got started by licensing lithography technology from Japanese companies and then building on it.

    The issue is mostly one of economics – fabs want higher-resolution lithography as soon as possible, and they only buy it once, which means the first company to develop a new litho technology takes the lion’s share of the revenue. If you’re second to the technology, or more than half a dozen nodes behind like SMEE is, there’s not a lot of demand, because fabs are already full of litho machines from when that node was new.

    The issue with a new company making leading-edge nodes is the incredible R&D cost involved. Nikon, Canon, and ASML shared the market when they all started developing EUV tech, and it took ASML 15+ years to develop it! Canon and Nikon teamed up, spent tens of billions of dollars on R&D, and dropped out once they realized they couldn’t beat ASML to market, because there wouldn’t be enough market left for them to make their money back.

    If you want to learn more about the history of the semiconductor industry, I recommend the Asianometry YouTube channel!




  • If you are truly starting from scratch, shooting for Raspberry Pi performance isn’t starting small – that’s a huge goal. It’s a complex chip built on a fairly modern process node (28 nm for the 4B) using the second-best-established architecture.

    A reasonable goal to shoot for would be an 8086-like chip, then perhaps something like a K3-II or early Pentium, and from there you can slowly work your way up.


  • To answer this well, a couple of further questions need addressing. First, when you say using only tech that is in the open, nothing proprietary, how strictly do you mean that? Historically, what Chinese foundries have done is buy a fab line far enough from the leading edge not to be questioned, then use that as a starting point for working towards smaller nodes. If that’s allowed, it would be fairly trivial – 40 nm doesn’t perform that badly.

    If you want the equivalent of “open-source” fab equipment, as far as I know that has never existed. In better news, if you go back to DUV/immersion lithography, it’s not just ASML making the machines – Nikon and Canon were still in the game – so power was less centralized.

    Second, what is the actual goal? If it’s just compute, no big deal. As long as you can write a C compiler for your architecture (or use RISC-V as other folks have mentioned) getting the Linux kernel running shouldn’t be too hard. However, you’re going to have to deal with manually modifying the firmware of any peripherals you want to run – PCIe devices, USB, I2C, etc. Not a firmware engineer, so I have no idea how hard it would be, but this is one of the things that’s been holding back Linux on Arm over the years.
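    To make the “write a C compiler for your architecture” step a bit more concrete, here is roughly what the very first freestanding program for a brand-new chip tends to look like, before any kernel or libc exists. This is only a sketch – the UART address, register layout, and the kmain entry point are invented for illustration, and a real port would take them from the chip’s own memory map and startup code:

        /* Minimal bare-metal "hello" in freestanding C.
           UART_BASE and the register offsets are hypothetical;
           substitute the values from your own chip's documentation. */
        #include <stdint.h>

        #define UART_BASE 0x10000000UL                       /* made-up MMIO base address */
        #define UART_TX   (*(volatile uint8_t *)(UART_BASE + 0x0))
        #define UART_BUSY (*(volatile uint8_t *)(UART_BASE + 0x4))

        static void uart_putc(char c) {
            while (UART_BUSY & 1)                            /* spin until the transmitter is idle */
                ;
            UART_TX = (uint8_t)c;
        }

        static void uart_puts(const char *s) {
            while (*s)
                uart_putc(*s++);
        }

        /* Entry point jumped to by a few lines of startup assembly;
           no OS underneath, so there is nowhere to return to. */
        void kmain(void) {
            uart_puts("hello from homemade silicon\r\n");
            for (;;)
                ;
        }

    Getting something like this to run is the comparatively easy part; as above, the pain starts once every peripheral expects its own firmware and driver work.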

    All in all, depending on how strict you want to be, it could be anywhere between slightly difficult and effectively impossible.





  • Microwave scattering is an absolute nightmare over that kind of distance. Even for much shorter distances, microwaves are only practical to transport over a couple of meters in a waveguide.

    If it’s transmitting to a base station, we can assume it’s in geosynchronous orbit, or about 22,000 miles from the surface. With a fairly large dish on the satellite, you could probably keep the beam fairly tight until it hit the atmosphere, but that last ~100 miles of air would scatter it like there’s no tomorrow. Clouds and humidity are also a huge problem – water is an exceptionally good absorber across most of the microwave band.
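    To put rough numbers on “fairly tight”: diffraction alone sets a floor on the spot size, before any atmospheric scattering. The frequency and aperture below are my own assumptions for a plausible design, not figures from the proposal – this is just a back-of-envelope sketch:

        /* Diffraction-limited spot size for a microwave power beam from GEO.
           5.8 GHz and a 1 km transmit aperture are assumed values for illustration. */
        #include <stdio.h>

        int main(void) {
            double c        = 2.998e8;   /* speed of light, m/s */
            double f        = 5.8e9;     /* assumed beam frequency, Hz */
            double lambda   = c / f;     /* wavelength, ~5.2 cm */
            double aperture = 1000.0;    /* assumed transmit antenna diameter, m */
            double range    = 3.58e7;    /* geosynchronous altitude, m (~22,000 miles) */

            double theta = 1.22 * lambda / aperture;  /* diffraction-limited half-angle, rad */
            double spot  = 2.0 * theta * range;       /* spot diameter on the ground, m */

            printf("divergence: %.2e rad, ground spot: ~%.1f km across\n",
                   theta, spot / 1000.0);
            return 0;
        }

    Even with a kilometre-wide transmitter, that works out to a spot several kilometres across in clear, dry air, which is why designs like this tend to assume a ground rectenna kilometres wide before cloud and humidity losses even enter the picture.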

    I saw numbers reported for the transmission efficiency somewhere (will update this if I can find it again), and they were sub-30%. The other 70% is either boiling clouds on its way down or missing the receiver on the ground and gently cooking the surrounding area.