bitcrafter

joined 1 year ago
[–] [email protected] 1 points 2 weeks ago

Uh... That wasn't quite what I had in mind for it either...

[–] [email protected] 3 points 2 weeks ago (3 children)

Keeping it as a pet is not quite the fate I had in mind for it...

[–] [email protected] 3 points 2 weeks ago (5 children)

Fair enough, but if the fawn is just there for the taking anyway...

[–] [email protected] 9 points 2 weeks ago (13 children)

In fairness, the deer population is way out of control, so I'm just doing my part to reduce it.

[–] [email protected] 2 points 2 weeks ago (1 children)

It's not really an architecture that is intended to map onto anything in existing hardware. That said, Mill Computing is working on a new, extremely unconventional architecture that is a lot closer to this; you can read more about it here, and the design of its register file (which resembles a conveyor belt) is discussed here.

[–] [email protected] 5 points 3 weeks ago

In fairness, the holodeck computer seems generally prone to going way overboard when constructing extrapolations of people; that is, after all, how it gave us a fully sentient Moriarty.

[–] [email protected] 1 points 3 weeks ago

I created a script that I dropped into /etc/cron.hourly which does the following:

  1. Uses rsync to mirror my root partition onto a btrfs partition on another hard drive (transferring only modified files).
  2. Uses btrfs subvolume snapshot to create a snapshot of that mirror (which only uses additional storage for modified files).
  3. Moves "old" snapshots into a trash directory so I can delete them later if I want to reclaim space.

It is as follows:

#!/usr/bin/env python
from datetime import datetime, timedelta
import os
import pathlib
import shutil
import subprocess
import sys

import portalocker

DATETIME_FORMAT = '%Y-%m-%d-%H%M'
BACKUP_DIRECTORY = pathlib.Path('/backups/internal')
MIRROR_DIRECTORY = BACKUP_DIRECTORY / 'mirror'
SNAPSHOT_DIRECTORY = BACKUP_DIRECTORY / 'snapshots'
TRASH_DIRECTORY = BACKUP_DIRECTORY / 'trash'

EXCLUDED = [
    '/backups',
    '/dev',
    '/media',
    '/lost+found',
    '/mnt',
    '/nix',
    '/proc',
    '/run',
    '/sys',
    '/tmp',
    '/var',

    '/home/*/.cache',
    '/home/*/.local/share/flatpak',
    '/home/*/.local/share/Trash',
    '/home/*/.steam',
    '/home/*/Downloads',
    '/home/*/Trash',
]

OPTIONS = [
    '-avAXH',
    '--delete',
    '--delete-excluded',
    '--numeric-ids',
    '--relative',
    '--progress',
]

def execute(command, *options):
    print('>', command, *options)
    subprocess.run([command, *options], check=True)

execute(
    '/usr/bin/mount',
    '-o', 'rw,remount',
    BACKUP_DIRECTORY,
)

try:
    # fail_when_locked=True makes portalocker raise AlreadyLocked immediately
    # instead of waiting for the lock, so the except clause below can fire.
    with portalocker.Lock(BACKUP_DIRECTORY / 'lock', fail_when_locked=True):
        execute(
            '/usr/bin/rsync',
            '/',
            MIRROR_DIRECTORY,
            *OPTIONS,
            *(f'--exclude={excluded_path}' for excluded_path in EXCLUDED),
        )

        execute(
            '/usr/bin/btrfs',
            'subvolume',
            'snapshot',
            '-r',
            MIRROR_DIRECTORY,
            SNAPSHOT_DIRECTORY / datetime.now().strftime(DATETIME_FORMAT),
        )

        snapshot_datetimes = sorted(
            (
                datetime.strptime(filename, DATETIME_FORMAT)
                for filename in os.listdir(SNAPSHOT_DIRECTORY)
            ),
        )

        # Keep all snapshots from the last 24 hours
        one_day_ago = datetime.now() - timedelta(days=1)
        while snapshot_datetimes and snapshot_datetimes[-1] >= one_day_ago:
            snapshot_datetimes.pop()

        # Helper: keep the newest remaining snapshot in the current group (as
        # selected by get_metric) and move the rest of that group to the trash
        def prune_all_with(get_metric):
            this = get_metric(snapshot_datetimes[-1])
            snapshot_datetimes.pop()
            while snapshot_datetimes and get_metric(snapshot_datetimes[-1]) == this:
                snapshot = SNAPSHOT_DIRECTORY / snapshot_datetimes[-1].strftime(DATETIME_FORMAT)
                snapshot_datetimes.pop()
                execute('/usr/bin/btrfs', 'property', 'set', '-ts', snapshot, 'ro', 'false')
                shutil.move(snapshot, TRASH_DIRECTORY)

        # Keep one snapshot per day for the last month
        last_daily_to_keep = datetime.now().date() - timedelta(days=30)
        while snapshot_datetimes and snapshot_datetimes[-1].date() >= last_daily_to_keep:
            prune_all_with(lambda x: x.date())

        # Keep one snapshot per week for the last three months
        last_weekly_to_keep = datetime.now().date() - timedelta(days=90)
        while snapshot_datetimes and snapshot_datetimes[-1].date() >= last_weekly_to_keep:
            prune_all_with(lambda x: x.date().isocalendar().week)

        # Keep one snapshot per month forever
        while snapshot_datetimes:
            prune_all_with(lambda x: x.date().month)
except portalocker.AlreadyLocked:
    sys.exit('Backup already in progress.')
finally:
    execute(
        '/usr/bin/mount',
        '-o', 'ro,remount',
        BACKUP_DIRECTORY,
    )
[–] [email protected] 5 points 3 weeks ago (3 children)

The IR is designed to be easy to optimize, not easy for a real machine to execute. Among other things, it assumes it has access to an infinite number of registers so that it never needs to (and in fact is not allowed to) write a new value into a previously used register.
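This "never reuse a register" discipline is what compiler folks call static single assignment (SSA) form. A rough sketch in Python of the renaming idea (illustrative only; no real compiler's IR or API is being modeled here):

```python
# Minimal sketch of single-assignment renaming (SSA form): every reassignment
# of a source variable writes to a fresh "register" instead, so no register is
# ever written twice. Instruction format here is made up for illustration.
import itertools

def to_ssa(instructions):
    """Rename destinations so that no register is ever written twice.

    `instructions` is a list of (dest, op, operands) triples using
    source-level variable names; the result uses versioned names like 'x.2'.
    """
    counter = itertools.count(1)
    current = {}  # source variable -> its latest SSA name

    ssa = []
    for dest, op, operands in instructions:
        # Reads refer to the *current* version of each variable.
        renamed_operands = tuple(current.get(v, v) for v in operands)
        # The destination always gets a brand-new name.
        new_name = f'{dest}.{next(counter)}'
        current[dest] = new_name
        ssa.append((new_name, op, renamed_operands))
    return ssa
```

So `x = a + b; x = x * x` becomes `x.1 = add a, b; x.2 = mul x.1, x.1`, and the optimizer never has to worry about a register changing out from under it.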

[–] [email protected] 2 points 3 weeks ago

Ah! Yes, I see where you were coming from now.

[–] [email protected] 2 points 3 weeks ago (2 children)

Just to be clear: my criticism is not that the other commenter was lying or being disingenuous about their own experiences, but that they made sweeping generalizations in their comment.

[–] [email protected] 7 points 3 weeks ago

It seems a little weird to compare them, given that GIMP is primarily for editing bitmap images and Inkscape is primarily for editing vector images.

[–] [email protected] 2 points 3 weeks ago

Because people with the free time to do so have already come together and organized themselves around a single Linux distribution for this purpose?
