• 0 Posts
  • 51 Comments
Joined 2 years ago
Cake day: July 23rd, 2023

  • Emma_Gold_Man@lemmy.dbzer0.com to aww@lemmy.world · It really did

    I would say exactly the opposite - it proves the point. The sameness of the two dogs and the lack of the corresponding marriage ceremony in the background rob the image of most of its significance, and the background is a copy that wouldn’t exist if the original hadn’t existed.


  • Advice from a long-time sysadmin: You’re probably asking the wrong question. ncdu is an efficient tool, so the real question is why it’s taking so long to complete, which usually points to an underlying issue with your setup. There are three likely answers:

    1. This drive is used on a server specifically to store very large numbers of very small files. This probably isn’t the case, as you’d already know that and be looking at it in smaller chunks.
    2. You have a network mount set up. Use the -x option to ncdu to restrict the scan to a single filesystem, or --exclude to skip the network mount, and your problem will be solved (along with the traffic spike on your LAN). There’s a short example at the end of this comment.
    3. You have a single directory with a large number of small files that never get cleared, such as an e-mail dead-letter folder or a program creating temp files outside the temp directories. Once a directory accumulates enough files, accessing it slows down dramatically. The following command will find it for you (reminder - make sure you understand what a command does before copying it into a terminal, DOUBLY so if it is run as root or has a sudo in it). Note that it will probably take several times as long to run as ncdu, because it does several manipulations in series rather than in parallel.

    sudo find $(grep '^/' /etc/fstab | awk '{print $2}') -xdev -type f -exec dirname {} \; | sort | uniq -c | sort -nr | head

    Explanation

    This command doesn’t give an exact file count, but it’s good enough for our purposes.

    sudo find # run find as root

    $( … ) # Command substitution - expands to the list of mount points we want to search

    grep '^/' /etc/fstab # Get the list of non-special local filesystems that the system knows how to mount (ignores many edge cases)

    awk '{print $2}' # We only want the second column - where those filesystems are mounted

    -xdev # tell find not to cross filesystem boundaries

    -type f # We want to count files

    -exec dirname {} \; # Ignore the file name, just list the directory once for each file in it

    sort | uniq -c # Count how many times each directory is listed (how many files it has)

    sort -nr # Order by count descending

    head # Only list the top 10

    If they are temp files or otherwise not needed, delete them. If they’re important, figure out how to split them into subdirectories based on first letter, hash, or whatever other method the software creating them supports.
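
    For point 2, scoping the scan looks like this (assuming the network share is mounted at /mnt/nas - substitute your own mount point):

    ncdu -x /                  # stay on the root filesystem, don't descend into other mounts
    ncdu --exclude /mnt/nas /  # or skip the network mount explicitly

    And if the files are important, here’s a rough sketch of bucketing a flat directory into subdirectories by first letter (assuming a hypothetical /var/spool/app and software that can be pointed at the new layout):

    cd /var/spool/app
    for f in *; do
        [ -f "$f" ] || continue            # skip anything that isn't a regular file
        d=${f:0:1}                         # bucket name: first character of the file name
        mkdir -p "$d" && mv -- "$f" "$d/"  # create the bucket and move the file into it
    done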


  • $126,500 per person, plus another $20,240 in housing expenses. Plus your $13,850 standard deduction (though if you’re making that much you’re probably itemizing for more). So $160,590 for an individual or $321,180 for married filing jointly. That’s assuming no kids and no other deductions or credits - which is pretty unlikely at that income level.
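
    Spelling out the arithmetic from those figures:

    $126,500 (foreign earned income exclusion)
    + $20,240 (housing exclusion)
    + $13,850 (standard deduction)
    = $160,590 per person, × 2 = $321,180 married filing jointly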

    $160,590 is the 93rd percentile for US income distribution. So yeah, if you (AND your partner, if any) are both in the top 7% income bracket, bad at tax preparation, and don’t hire an accountant, you might still pay tax on the income over that amount. Of course, making that much while keeping the kind of ethics that let you care about anyone other than yourself is a nontrivial endeavor.

    Don’t forget that your foreign employer won’t be reporting to the IRS. So if your protest extends to not voluntarily reporting that excess income …


  • The way this works in the server world is “95th percentile” billing. They track your bandwidth usage over the course of the month (probably in 5 minute intervals), strike off the 5% highest peaks, and your bill for the month is based on the highest usage remaining.

    That’s considerably more honest than charging you based solely on the highest rate you could theoretically hit at any single point in time (which is how ISPs define the “max bandwidth”) and then charging you again or cutting off your service if you use more than a certain amount they won’t even put in writing.
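
    To make that concrete, here’s a rough sketch of the billing calculation, assuming one bandwidth sample in Mbps per line in a hypothetical samples.txt (one sample every 5 minutes for the month):

    sort -n samples.txt | awk '{a[NR]=$1} END {print "95th percentile:", a[int(NR*0.95)], "Mbps"}'

    Sorting the samples and reading off the value 95% of the way up is the same as striking the top 5% of peaks and billing on the highest sample that remains.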



  • Probably not. It looks like it’s setting the fake address before reading the tunnel parameters, where the real address is stored - probably a kludge so the program doesn’t crash if the connection address is undefined. So check whether the real address is included in those parameters.

    Also check the function that establishes the connection. 10.1.1.1 is not a public address (it’s in the private 10.0.0.0/8 range), so unless there is a VPN device listening at that local address, the tunnel should fail to establish and throw an error, triggering the exception clause in that code. Again, you’ll want to confirm that in the code.





  • Manually keying in the PIN is only needed when plugging in the device. Challenges for TOTP, FIDO2, etc. are a configuration option, and are only 3 digits if enabled (press any button if disabled).

    As for “excessive amount of security”, security as an absolute measure isn’t a great way to think about it. Use case and threat model are more apt.

    For use case, I’ll point out that it’s also a PGP and SSH device, where there is no third-party server applying the first factor (something you know), so the device itself has to apply both factors.
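
    For the SSH side, a minimal sketch of what that looks like with a FIDO2-capable token and a recent OpenSSH (the option names are OpenSSH’s, not specific to any one device):

    ssh-keygen -t ed25519-sk -O verify-required  # key material lives on the token; every use requires the PIN (something you know) plus the token itself (something you have)

    The PIN check happens on the device, which is the “apply both factors on device” part.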

    For threat model, I’ll give the example of an activist who is arrested. If their e-mail provider is in the country, the police can compel the provider to hand over access, allowing them to reset passwords on other, more secure services hosted outside the country. The police now have the second factor (something you have), but can’t use it because it’s PIN-locked.