Thursday, December 16, 2021

SD Card File Transfers Done Carefully

SEE WARNING BELOW, DON'T DO WHAT I SUGGEST IN THIS POST!

With USB2 SD card readers there really isn't any need to slow them down; they're already not fast.  But if you are like me and you like fast file transfers, you probably looked for and found a USB3 SD card reader.

Unfortunately some combos of SD cards and SD card readers bog down real bad when large transfers are done at high speed. On Linux, this is the solution I use:

tar -cf - {source_files} | pv -q -L {transfer_rate_limit} | tar -C {destination_directory} -xvf -

The {transfer_rate_limit} is specified in bytes/sec; a suffix (k, m, g, or t) can be added to the end to specify KiB, MiB, GiB, or TiB per second.

I got this from Matt on Stack Exchange.
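For example, to throttle a copy to 10 MiB/s, it looks like this with the blanks filled in (the paths here are hypothetical stand-ins):

# copy Pictures/ onto the card at no more than 10 MiB/s (hypothetical paths)
tar -cf - Pictures/ | pv -q -L 10M | tar -C /mnt/sdcard -xvf -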

Another use for this: if the transfer is being done between two modern computers, you can speed up the transfer by adding compression at the source and decompression at the destination. The 'z' is for gzip, which is fairly fast:

tar -czf - {source_files} | pv -q -L {transfer_rate_limit} | tar -C {destination_directory} -xzvf -

lz4 doesn't compress quite as small, but it's much faster, as shown by the CatchChallenger benchmarks - just make sure it's installed and supported by your version of tar:

tar -I lz4 -cf - {source_files} | pv -q -L {transfer_rate_limit} | tar -C {destination_directory} -I lz4 -xvf -
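A quick sanity check before relying on it (the apt package name here is my assumption for recent Ubuntu-ish systems; older releases shipped it as liblz4-tool):

# check that the lz4 binary exists; install it if not (package name assumed)
command -v lz4 || sudo apt install lz4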

It's a long command with lots of fiddly bits, but this is exactly how UNIX was initially designed: a system where you can string fairly simple programs together to accomplish complex tasks.  Check out Brian Kernighan talking about it on YouTube.

I haven't figured out what the best transfer speed is; I assume it depends on your card and card reader.  The program "time" will measure how long the command takes to run:


anon@grayghost:~$ time tar -I lz4 -cf - ToDo.txt | pv -q -L 4096 | tar -C /home/anon/Sy/ -I lz4 -xvf -
ToDo.txt

real 0m0.015s
user 0m0.009s
sys 0m0.016s
anon@grayghost:~$ time tar -I lz4 -cf - ToDo.txt | pv -q -L 1024 | tar -C /home/anon/Sy/ -I lz4 -xvf -
ToDo.txt

real 0m0.287s
user 0m0.016s
sys 0m0.019s
anon@grayghost:~$

 Neat!

Update: this also works great when making ISO images with dd:

dd bs=1M if=[image_name] | pv -q -L 10M | dd of=[your_destination_here]
(I haven't tested that dd command YMMV)
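If I were writing an image out to an SD card, I'd expect something more like this to work - a sketch, still untested; /dev/sdX is a hypothetical device name, triple-check it before pointing dd at anything:

# write [image_name] to a (hypothetical!) device at no more than 10 MiB/s - untested sketch
dd bs=1M if=[image_name] | pv -q -L 10M | dd bs=1M of=/dev/sdX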

I just ran into a problem.  The contents of my SD card were corrupted.  Transferring data slowly off of the card actually heated it up to 164°F, while letting it run at full speed only reached 99°F.  I don't yet know if the heat and transfer speed were the source of the problem.  Until I figure it out, avoid this technique, or at least be a bit cautious with it!



Wednesday, December 15, 2021

GRUB2

I've always loved GRUB.  It's not easy to use, but man, it works well.  Usually the setup is automatic, though it's a little scary if you have to do something manually.  But then it's set up and it works for years.

If/when GRUB fails it doesn't wreck your data, and can usually be recovered with a boot disk and a few commands.

Seriously love GRUB, and GRUB2 is dramatically better!

Yesterday I got tired of putting 2-4GiB ISO images onto 32GiB SD cards.  My friends were starting to wonder if I needed an intervention.

So I started looking at tools to put multiple ISOs onto one large bootable SD card, and simply choose which one you want to boot at boot time.  I found the excellent website LinuxBabe.com, which suggested MultiBootUSB and MultiSystem - but both of those appear to be abandoned projects from some time ago.

Someone on Discord suggested PLOP. Truly hilarious name.  But that also looked really sketchy.  Then I found another LinuxBabe article; apparently GRUB2 can boot directly to ISOs!!

So I made these partitions on a USB stick: 

Number  Start   End     Size    File system  Name      Flags
 1      1049kB  2097kB  1049kB               bios
 2      2097kB  68.2MB  66.1MB  fat32        efi_boot  boot, esp
 3      68.2MB  128GB   128GB   ext2         iso_cube

The first partition is a BIOS protective partition and has no file system; some old computers or misbehaving software will overwrite the first little bit of a disk.  1 MiB is excessive, but it also helps ensure partitions are 4k block aligned, and I don't care about 1 MiB.
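For the record, here is roughly how those partitions could be recreated with parted (a sketch only; /dev/sdX is a hypothetical stand-in for the USB stick - don't run this against the wrong disk!):

# hypothetical device name - double check before running!
parted /dev/sdX -- mklabel gpt
parted /dev/sdX -- mkpart bios 1MiB 2MiB
parted /dev/sdX -- mkpart efi_boot fat32 2MiB 65MiB
parted /dev/sdX -- set 2 esp on
parted /dev/sdX -- mkpart iso_cube ext2 65MiB 100%
mkfs.fat -F 32 /dev/sdX2
mkfs.ext2 /dev/sdX3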

Mounted new filesystems:

mkdir /mnt/efi_boot
mkdir /mnt/iso_cube
mount /dev/{usbstick}2 /mnt/efi_boot
mount /dev/{usbstick}3 /mnt/iso_cube
mkdir /mnt/efi_boot/boot
 

Installed GRUB2 onto the USB stick (this was on an Ubuntu system, YMMV):

grub-install --efi-directory=/mnt/efi_boot --boot-directory=/mnt/efi_boot/boot --removable

Booted to the USB stick, and then proceeded to boot directly to an ISO file.  The 'ls' commands are me looking for files; GRUB names disks in a way that is easiest for GRUB - we just have to deal with it:

ls
ls (hd0,gpt2)/
ls (hd1,gpt3)/
set isofile="/ubuntu-20.04.3-desktop-amd64.iso"
loopback loop (hd1,gpt3)$isofile
ls (loop)
ls (loop)/
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet noeject noprompt splash
initrd (loop)/casper/initrd
boot

Victory!

So LinuxBabe showed how to add ISO entries to the GRUB menu so you don't have to memorize and type a bunch of commands every time.  After I add a bunch more ISOs and test them out, I expect I'll do that.  For now I just made a file on the USB stick with the commands I typed; the GRUB command line has 'cat' so I can read my instructions.  I don't boot from USB very often, so I might just leave it that way.  Having to use the GRUB command line every once in a while will go a long way toward helping me learn it.
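For reference, here's what those commands look like wrapped up as a menu entry - a sketch dropped into grub.cfg under the stick's boot/grub directory (grub-install creates the directory, but not the file):

menuentry "Ubuntu 20.04.3 Desktop ISO" {
    # same commands I typed at the GRUB prompt above
    set isofile="/ubuntu-20.04.3-desktop-amd64.iso"
    loopback loop (hd1,gpt3)$isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet noeject noprompt splash
    initrd (loop)/casper/initrd
}

The (hd1,gpt3) part is the fragile bit - GRUB's disk numbering can shift between machines, so a more robust grub.cfg would locate the iso_cube partition with GRUB's search command instead of hard-coding it.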

Monday, December 13, 2021

tar

I regret what I wrote below.  It turns out that tar has gotten easier over the years; it now supports the original UNIX style arguments and some easier to understand "GNU" style arguments.  All I had to do to extract my tar file was:

tar --extract --file=$filename

I need to be less grumpy.  Tarball creation is easier now too:

tar --create --file=$newtarball --$options $files

compression options; pick one: --[bzip2,xz,lzip,lzma,lzop,gzip,compress,zstd]
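So a full creation example looks something like this (the file names are hypothetical):

# create a gzip-compressed tarball of Documents/ (hypothetical names)
tar --create --gzip --file=documents.tar.gz Documents/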
 

other options:
  --label=$TEXT
  Handling of file attributes - (there are a lot of confusing options here; I'm only listing a few that I can see might be useful to me:)
  --sort=[none,name,inode] (inode is a performance tweak)
  --atime-preserve
  --preserve-permissions
  --preserve-order
  --format=$tldr
      (there are some archaic (?) formats supported; the important thing to know is that tar itself has two formats, tar<=1.12.x and tar>=1.13.x - hopefully I never have to care about this, but it seems like something that could bite the llama's ass.)
  Device blocking options - (these are options that you don't need until you really need them, and then you find out that good-ol-tar is [hypothetically] the only archive format that can handle your input...)
  Device selection and switching - (these options seem to have everything to do with tape backups.  I have three Tandberg tape libraries.  I want to learn how to use amanda.  I don't want to use tar to make multi-volume tape backups.  But it's good to know I could; though I'd never be able to find anything without some sort of very complicated index.)
  Extended File Attributes - ACL's, SELinux, and xattrs options.
 

I still think we need a revolution in open source usability.  But we're not going to get there by spreading grumpiness.  - Perhaps I should limit the number of blog posts I write at 2:45AM.  :-)

I've come to dread every tarball download, because I know I'm going to sit there for at least 10 minutes desperately scrolling through the contents of 'man tar' trying to figure out how to extract the files.

I still remember how to use pkzip and pkunzip, version 2.04g - their help files were SOOOOO easy to read and understand.

Why does tar have to be so horrible?  I can't even imagine how much more difficult it would be if I wasn't a native English speaker.  -- Although I suspect the man pages are probably smart enough to display help text in your language of choice, if such help exists.

Manual page tar(1) is 965 lines long.  At ~60 lines per page that's 16 pages.  There are about 4 million websites out there that have been written to answer the question "how to untar".

Open source software is CONSTANTLY shooting itself in the foot.  Because the philosophy is so attractive, so GOOD, uncountable human hours have been spent developing amazing open source tools, which any sane person would/should avoid like the plague, because they are just too complicated to use.

We need a revolution in open source, a usability revolution.

[Snort]  check this out...

"failed-read
                     Suppresses  warnings  about unreadable files or directories. This keyword applies only if used together with the --ignore-failed-read option."

So to suppress an unreadable file warning you have to use the --warning=failed-read option together with the --ignore-failed-read option?  I'm so confused and flabbergasted....   The madness just doesn't end!  These 16 pages are the BRIEF version!  Apparently I'm supposed to read the 'info' pages for the full version.


Computer Name Resolution, DNS and Friends - My Musings and Ramblings

 I intend this to be a blog post that I'm going to update, but it might make more sense to move this information to a personal knowledge web....   We'll see.

DNS  -  You want to contact a computer whose name you know?  No problem.  The computer you're using makes a request to a Domain Name Service, asking for an IP address for the server, then proceeds to connect to the server using its IP address. - Simple!

Many fundamental internet technologies were created by very clever people in robust and simple ways, and have either stood the test of time or developed such historical inertia that they had to be kept the same or else everything would break.  Many of these technologies have been extended and expanded in very clever ways to function better and more completely today than they ever have.

At first blush Computer Name Resolution doesn't seem to be one of these golden children.  Check out the Wikipedia history of DNS - in 1973 ARPANET used a hosts.txt file on each system, and apparently it was managed by Jake over at Stanford.  She managed all computer name resolution for ~17 years.  --  Stick that on your resume and smoke it.  --  Oh, and she invented domains.  --  Although I 100% believe Jake was clever, the systems that she and her team put into place have not been robust.  The one constant for DNS seems to be that it constantly changes; and stays the same.

I imagine there is a ton of awesome history between 1989 and now.  But I'm trying to get my server to work, so I'm going to focus on today.  Today Jake has been replaced by a group known as the "Internet Assigned Numbers Authority" (IANA).

Or excuse me, maybe it's the Public Technical Identifiers (PTI) that actually run things; they are an affiliate of ICANN, contracted to perform the IANA functions on behalf of ICANN.

Based on my messed up preconceived notions and the very few things I think I've learned about ICANN, I believe them to be completely morally bankrupt.  --  Really, never in history has a bureaucracy been worse than ICANN.  As far as I know they are completely useless and a massive detriment to society in general.  The only way to get anything done with ICANN is to provide nation state level bribes to its ~388 employees.

Jon Postel did it better in his spare time and without charging anything for his services.  

None of this should have happened, it's a modern tragedy. 

"Once you realize what a joke everything is, being the Comedian's is the only thing that makes sense."

—Eddie Blake

So forget it, let's move on.  Let's look at the technology, and see what's been done and where we can go from here.

If you have something on the internet, you probably started with getting a name from Jon/ICANN/PTI/IANA - let's call them JIPI.  JIPI authorizes registrars to charge you yearly for your name.

Either those registrars, a hosting company, or you must provide DNS servers to go along with your name.  You maintain 'records' with those DNS servers so that when someone requests information on how to contact your site, the DNS server responds with an IP address, or such, which the requestor can use to contact the computer that is hosting your site.  "Site" in this case could be a web page, game, virtual world, or whatever...

I just looked up "josiahluscher.com" and a name server, ns3.dreamhost.com, replied with an 'A' record and the IP address 64.90.48.157.
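That lookup is a one-liner with dig (from the dnsutils package on my list below); going by the answer above, I'd expect output like this:

# ask dreamhost's name server for the A record directly
dig +short A josiahluscher.com @ns3.dreamhost.com
64.90.48.157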

Neat eh?  DNS servers are hierarchical, and divided up into zones.  So if whatever DNS server you contact doesn't have an authoritative answer, it starts by asking a root DNS server.  The root server won't give the final answer though; it just refers the requester to the next server down the hierarchy that should know more (for a .com name, the .com servers).  This referral process may repeat several times.  Finally the "authoritative" DNS server is found, and then you get an answer.
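You can actually watch that referral chain happen with dig's +trace option, which starts at the root servers and follows the referrals down.  The output is long and varies, but you can see each hop: root, then .com, then the authoritative server:

# follow the referrals from the root servers down to the authoritative answer
dig +trace josiahluscher.com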

Obvious challenges to traditional DNS:

  1. Internet DNS doesn't know about local networks, so a local DNS is needed.
  2. Multiple computers serving many users who all expect to use the same service.
  3. Prevent malicious actors from replying with fake destinations to perpetrate man-in-the-middle attacks.
  4. Others?



Pieces of software that I want to learn about related to DNS:
nmcli - NetworkManager
systemd-resolved
dnsutils
ifupdown
iproute2
resolvconf
dhclient
net-tools
mDNS

nmcli

In terms of ease of use and the help information available, nmcli is one of those programs that give Linux a bad name.

Good news though: I did solve the immediate problem that inspired this post.  My eno1 wired gigabit Ethernet interface was set up with a static IP and static DNS records which were no longer correct.  The way to change that interface to DHCP and remove the old records is this:

nmcli device show eno1
less /etc/sysconfig/network-scripts/ifcfg-eno1

nmcli con mod eno1 ipv4.ignore-auto-dns no
nmcli device modify eno1 ipv4.method auto
nmcli device modify eno1 ipv6.method auto
nmcli con mod eno1 -ipv4.dns [Old.Incorrect.DNS.IP]
nmcli con mod eno1 -ipv4.dns [Old.Incorrect.DNS.IP]
systemctl restart NetworkManager

nmcli device show eno1
less /etc/sysconfig/network-scripts/ifcfg-eno1
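
And to double check that the stale DNS entries are really gone (IP4.DNS is the field name nmcli prints on my system):

# show just the DNS servers the interface is actually using
nmcli device show eno1 | grep IP4.DNS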


To be continued someday....