Every community I care about is dead

  • 0 Posts
  • 56 Comments
Joined 1 year ago
Cake day: June 12th, 2023




  • You can change the background color by editing the ["cre_background_color"] key in settings.reader.lua (again, I dislike needing to configure it like this). On my Android and desktop I set it to ["cre_background_color"] = "0xECECEC", which inverts into a nice gray when I switch to night mode; I then invert all the image colors so they look normal again. Font color can’t be changed that way, to my knowledge, but you can change it with custom CSS snippets.
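
    For reference, a rough sketch of what that entry looks like inside settings.reader.lua - the file is a Lua table that KOReader generates itself, simplified here to just the key mentioned above:

        -- settings.reader.lua (KOReader writes this file itself; structure simplified)
        return {
            ["cre_background_color"] = "0xECECEC", -- light gray; inverts to a dark gray in night mode
        }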



  • Have you tried KOReader yet? It’s not Material UI and doesn’t have any sort of “theme”, since it’s very focused on just showing your text, but it lets you pick fonts and styles for your books extensively, has dictionary lookups (tap and hold), page view, and it can sync with other KOReader installs (it’s available on the desktop and many physical ereaders). My main gripe is that it’s very configurable and I don’t personally like many of the defaults. Once it’s all set up it’s quite powerful, and I use it on my physical ereader, Android phone, and desktop PC in roughly the same configuration.



  • Conduit is also licensed under Apache 2.0, so it could also be taken closed source at any point. The reason this wouldn’t impact Conduit as much is that there are other contributors, whilst Synapse and Dendrite are almost exclusively developed by Element.

    Right. The current concern is that if Synapse/Dendrite went closed source right now, an open source version would be as good as dead. Element is responsible for 95% of Synapse/Dendrite and I’m sure a community fork would have to play a lot of catch-up to figure out how to keep it going. If the community were more involved in Synapse/Dendrite development (and if Element let them), there would be less cause for alarm, since closing the source would just mean an immediate community fork and putting Element on ignore. Also, to reiterate, the Matrix Foundation is not going along with Element on this move, and even if Element pulled something shady the Matrix Core Spec etc. would still remain open and under the Foundation’s control, so the most we have to lose is Synapse/Dendrite and all of Element’s developers.

    As for the rest, I agree, and I do actually trust that Element is simply playing their only card here. These maneuvers are all required for Element to survive as a company at all, but they unfortunately leave this backdoor open as a consequence. Matthew has pinky-promised over and over that they are only acting in good faith and would never use the backdoor, but it’s understandable that its mere presence is making everyone uneasy. Best case, we take this as a warning sign that if Element drops dead tomorrow then Matrix is dead too. If people don’t want Matrix to be practically owned by Element, then we should diversify and prepare escape plans.



  • This is actually quite a controversial change, mainly because of their switch to a CLA. That indirectly gives them the option to relicense the code as closed source whenever they feel like it in the future. Semi-controversially, they are also primarily making this AGPL change in order to begin selling dual licenses to companies. The Matrix Foundation itself does not support this change from Element, though Element is within its rights to make it.

    You can read some more thoughts on this from the pessimistic folks at HackerNews. My main takeaway is that I don’t trust Element because I don’t trust anyone. I’m sure they’re doing this in good faith but I don’t like the power they have at the moment. I hope this is what’s needed to begin focusing efforts on alternative homeserver implementations like Conduit.



  • I prefer recertified ones if they’re significantly cheaper, but that’s up to you. Recertified drives will likely fail sooner, but when they’re around 60% of the cost the gamble makes sense.

    As for which RAID level, that’s up to you and how you’re setting up your array. If you’re running ZFS, mirrored pairs are fairly flexible since you can add a pair of any size disks whenever you want, but they cost you 50% of your disk space in redundancy. For RAID5/6 you want the disk sizes to match, and on ZFS you won’t be able to add disks to an existing RAID5/6 (RAIDZ) vdev yet - the code for that feature is slated for the next release, which is about a year out.
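
    As a rough example of what that pair-at-a-time growth looks like (the pool name and device paths here are made up):

        # create a pool from one mirrored pair of disks
        zpool create tank mirror /dev/sda /dev/sdb

        # later, grow the pool by adding another mirrored pair - the new pair can be a different size
        zpool add tank mirror /dev/sdc /dev/sdd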





  • You can also use MergerFS+SnapRAID over individual BTRFS disks, which gives you a pseudo-RAID5/6 that is safe. You dedicate one or more disks to hold parity, and the rest hold data. At a specified interval, SnapRAID calculates parity and stores it on the parity disk(s) (it’s not realtime). MergerFS scatters your files across the data disks without striping and presents them under one mount point. Speed is limited to that of the single disk holding the file. An unmitigated disk failure only loses the files that were assigned to that disk, thanks to the lack of striping, and disks can be pulled and plugged in elsewhere to access the files they hold (a rough config sketch follows at the end of this comment).

    It’s a bit of a weird-feeling solution if you’re used to traditional RAID, but it’s very flexible because you can add and remove disks of any size, as long as your parity disks are the largest.
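
    Roughly what the two halves of that setup look like - paths, disk names, and the schedule here are made up, so check the SnapRAID and MergerFS docs for the real options:

        # /etc/snapraid.conf (hypothetical layout: two data disks, one parity disk)
        parity /mnt/parity1/snapraid.parity
        content /mnt/disk1/snapraid.content
        content /mnt/disk2/snapraid.content
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/

        # /etc/fstab - MergerFS pools the data disks under one mount point, no striping
        /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

        # run on a schedule (e.g. nightly cron) - parity is only as fresh as the last sync
        snapraid sync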


  • From reading the comments on this at HackerNews, both from others and from the CEO (username Arathorn there), I think it’s a change that needed to happen, though not an ideal one at first glance. I agree with twicetwice’s take most of all. For what it’s worth, I 100% believe that they have everyone’s best interests at heart right now, and that they’re using the CLA to save themselves from getting buried by their proprietary opponents. I don’t make a habit of trusting anyone, though, and I would really prefer to see this revisited in the future if at all possible. In the unlikely event that they do flip the license to closed source, I think the open-source side of Matrix should still live on through alternative implementations, and nothing would be irreparably lost.


  • Mirrored vdevs allow growth by adding a pair at a time, yes. Healing works with mirrors because the two disks in a mirror are supposed to hold the same data. When a read or scrub happens, if there are any checksum failures ZFS replaces the bad block on one disk with the other disk’s good copy of that block (see the sketch at the end of this comment).

    Many ZFS’ers swear by mirrored vdevs because they give you the best performance, they’re more flexible, and resilvering a failed mirror disk is an order of magnitude faster than resilvering a failed RAIDZ, leaving a smaller window for a second disk to fail. The big downside is that they eat 50% of your disk capacity. I personally run mirrored vdevs because it’s more flexible for a small home NAS, and I make up for some of the disk inefficiency by buying whatever-size disks are on sale and throwing them in whenever I see a good price.
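
    For illustration, roughly how the healing and a mirror-disk replacement look in practice (pool and device names are made up):

        # read and verify every block; bad blocks are rewritten from the mirror's good copy
        zpool scrub tank

        # swap a failed disk in a mirror - only that vdev's data needs to resilver
        zpool replace tank /dev/sdb /dev/sde

        # watch scrub/resilver progress
        zpool status tank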