Stupid Unix Tricks

2019-10-17 11:52 · sneak.berlin

The personal website of Jeffrey Paul.

These are my stupid unix tricks. I hope that they are useful to you.

Platform Note

I use Mac OS X (pron: “ten”). If you don’t, you might want to switch instances of ~/Library/ to something else, like ~/.local/.

Modular .bashrc

Before we begin, first note that bashrc refers to something that runs in each new interactive shell, and profile refers to something that runs only in login shells (the kind your terminal spawns, not the kind that runs a shell script). They aren’t the same and you don’t want them to be the same.

A lot of software and configuration wants to run something in each new shell, and you’ll want to add your own aliases and functions to your shell environment. Manually editing ~/.bashrc is a drag, as is grepping it to determine programmatically whether it has already been modified in a specific way. Make it modular instead:

mkdir -p ~/Library/bashrc.d ~/Library/profile.d
touch ~/Library/bashrc.d/000.keep.sh
touch ~/Library/profile.d/000.keep.sh
cat > ~/.bashrc <<'EOF'
# do not edit this file. put files in the dir below.
for FN in $HOME/Library/bashrc.d/*.sh ; do
    source "$FN"
done
EOF
cat > ~/.profile <<'EOF'
# do not edit this file. put files in the dir below.
source ~/.bashrc
for FN in $HOME/Library/profile.d/*.sh ; do
    source "$FN"
done
EOF

Now you can use standard tools like rm, cat, and cp, plus tests like if [[ -e $HOME/Library/bashrc.d/111.whatever.sh ]], to add, test, and remove things from your shell environment.
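
For example, installing, checking for, and removing an alias becomes plain file manipulation (the filename and alias here are just examples):

# add an alias to all future shells
echo 'alias ll="ls -la"' > ~/Library/bashrc.d/200.ll.sh

# check programmatically whether it is installed
if [[ -e ~/Library/bashrc.d/200.ll.sh ]]; then echo "installed"; fi

# remove it again
rm ~/Library/bashrc.d/200.ll.sh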

Here are some of mine:

sneak@pris:~/Library/bashrc.d$ grep . 100*
100.caskroom-dest.sh:export HOMEBREW_CASK_OPTS="--appdir=$HOME/Applications"
100.gopath.sh:export GOPATH="$HOME/Library/Go"
100.homebrew-no-spyware.sh:export HOMEBREW_NO_ANALYTICS=1
100.homebrew-paths.sh:export PATH+=":$HOME/Library/Homebrew/bin"
100.homebrew-paths.sh:export PATH+=":$HOME/Library/Homebrew/sbin"
100.localbin.sh:export PATH+=":$HOME/Library/Local/bin"
100.localbin.sh:export PATH+=":$HOME/Library/Local/sbin"
100.yarnpaths.sh:export PATH+=":$HOME/.yarn/bin"  # homebrew's yarn installs to here

Prefix them with numbers so that they sort and run in order; e.g. you want your bin paths (for python, yarn, etc.) added to your $PATH before you start trying to run things from within them.

Bonus points if you synchronize a directory (e.g. via dropbox/gdrive, or, better yet, via syncthing like I do). My ~/.bashrc actually contains:

# do not edit this file. put files in the dirs below.
for FN in $HOME/Library/bashrc.d/*.sh ; do
    source "$FN"
done
for FN in $HOME/Documents/sync/bashrc.d/*.sh ; do
    source "$FN"
done

This way I can add aliases and environment variables in ~/Documents/sync/bashrc.d/ and they magically appear on all of my machines without additional configuration.

Wrap Startup Commands

Wrap your startup script commands in checks to prevent errors if things aren’t installed or available, e.g.:

Don’t try to install things with brew if brew is not installed:

if which brew >/dev/null 2>&1 ; then
    brew install jq
fi

Change the GOPATH only if the directory exists on the machine in question:

if [[ -d "$HOME/dev/go" ]]; then
    export GOPATH="$HOME/dev/go"
fi

This way you can put things that need to happen into startup scripts (e.g. installation of jq for subsequent commands in the script to work) and they won’t error out if files or directories aren’t present yet.

Another example (from ~/Documents/sync/bashrc.d/999.kubectl.sh):

This loads bash completion for kubectl, but only on systems that have kubectl installed and in the $PATH already.

if which kubectl >/dev/null 2>&1 ; then
    source <(kubectl completion bash)
fi

Use an HSM for SSH Keys

An ssh key on disk, even with a passphrase, is vulnerable to malware (malware can steal your files, and keylog your passphrase to decrypt them). Put your ssh private keys somewhere that software (any software, even your own) on your computer simply cannot access them.

The best way to store SSH private keys is in a hardware security module, or HSM.

I use a Yubikey 4C Nano (though the Yubikey 5C Nano is current now), via gpg-agent. I have one physically installed in each computer I regularly use, plus a few spares stashed in safe places offsite. I generated the keys on the devices, did not back them up at generation time (so now they can’t ever be exported from the devices at all), and each device has its own unique key. (They have the added benefit of serving as U2F tokens for web authentication, something you absolutely should be using everywhere you can.)

The gpg-agent is a small daemon that is part of GnuPG that runs locally and allows you to use a GnuPG key as an SSH key. GnuPG supports using a smartcard as a GnuPG key. Yubikeys can serve as GnuPG-compatible CCID smartcards. This means that your Yubikey, in CCID mode, can be used to authenticate to SSH servers.

To initialize a key on the card, use the instructions found in this guide. I do not recommend setting an expiration (as they suggest), and don’t put your real name/email on the GnuPG keys it generates, as these are not going to be used for normal GnuPG-style things.
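
The rough shape of the on-card generation, for reference (the full interactive procedure is in the guide):

gpg --card-edit
# then, at the gpg/card> prompt:
#   admin
#   generate
# answer "no" when offered an off-card backup, so the key can never leave the device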

The author of that tutorial has a slightly different (perhaps better) take on what to put in your .bashrc to use the card for ssh. Mine is below.

Set it up to use your GPG smartcard to authenticate to remote hosts by dropping a file in your modular profile.d directory:

cat > ~/Library/profile.d/900.gpg-agent.sh <<'EOF'
# check for existing running agent info
if [[ -e $HOME/.gpg-agent-info ]]; then
    source $HOME/.gpg-agent-info
    export GPG_AGENT_INFO SSH_AUTH_SOCK SSH_AGENT_PID
fi

# test existing agent, remove info file if not working
ssh-add -L 2>/dev/null >/dev/null || rm -f $HOME/.gpg-agent-info

# if no info file, start up potentially-new, working agent
if [[ ! -e $HOME/.gpg-agent-info ]]; then
    if which gpg-agent >/dev/null 2>&1 ; then
        gpg-agent \
            --enable-ssh-support \
            --daemon \
            --pinentry-program $(brew --prefix)/bin/pinentry-mac \
            2> /dev/null > $HOME/.gpg-agent-info
    fi
fi

# load up new agent info
if [[ -e $HOME/.gpg-agent-info ]]; then
    source $HOME/.gpg-agent-info
    export GPG_AGENT_INFO SSH_AUTH_SOCK SSH_AGENT_PID
fi
EOF

Once you have generated a key on your card and started the gpg-agent correctly, ssh-add -L | grep cardno will show you the ssh public key from the key on your Yubikey, e.g.:

sneak@pris:~$ ssh-add -L | grep cardno
ssh-rsa AAAAB3NzaC1yc2EAAAA....VCBZawcIANQ== cardno:000601231234
sneak@pris:~$

Save the GnuPG public keys from all the cards and export them as a single ascii-armored bundle (gpg -a --export $KEYIDS > keys.txt) which you save somewhere easily accessible. You can then use a tool like mine to easily encrypt data for safekeeping that you will always be able to decrypt should you have at least one of your HSMs around.

There are two options for distributing your ssh public keys: GitHub, or self-hosted. The GitHub option has some drawbacks, but is fine for most people (other than the fact that you should not be using GitHub at all, for anything).

Add all of your ssh public keys from your various HSMs (you should have one for each computer you type on) to your GitHub account. GitHub publishes everyone’s public keys at https://github.com/username.keys (here’s mine).

Then, on new systems, simply paste this line (substituting your own username, of course):

mkdir -p ~/.ssh
curl -sf https://github.com/sneak.keys > ~/.ssh/authorized_keys

You may be tempted to crontab this like I was, so that the keys on all of your machines are automatically updated on adds/removes to the master list. If you do so, you give anyone who controls the github.com domain the ability to add ssh keys to your machines automatically. You may or may not be okay with this—I am not, considering Microsoft (GitHub’s owner) is a giant military defense contractor and eager partner in the US military’s illegal bulk surveillance programs.

Note: If you do end up running it from cron, be sure to check the exit status of curl before replacing the file (i.e. don’t use the line above unmodified), because if the network is down when cron runs, the redirection will clobber your file without refilling it, leaving your authorized_keys file empty.
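
A minimal sketch of a safer, cron-friendly version (write to a temp file, and only replace authorized_keys if curl succeeded and produced output):

TMPF="$(mktemp)"
if curl -sf https://github.com/sneak.keys > "$TMPF" && [[ -s "$TMPF" ]]; then
    mv "$TMPF" ~/.ssh/authorized_keys
else
    rm -f "$TMPF"
fi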

Personally, I like to have ssh keys that have access to GitHub (non-HSM keys, such as on my Pixelbook, which sadly doesn’t support Linux USB passthrough for the Yubikey smartcard) that don’t also have root on my machines, so I maintain a separate list on my own website:

https://sneak.cloud/authorized_keys

Then, on new boxes, I just paste the following:

mkdir .ssh
cd .ssh
mv authorized_keys authorized_keys.orig
wget https://sneak.cloud/authorized_keys

Use A Remote Docker

Docker Desktop for Mac is closed-source software, which is dumb for something that asks for administrator permissions on your local machine. That lameness aside, it runs the docker daemon (not the command-line client) inside a Linux VM on your local machine, which is probably a relatively slow laptop on a not-great internet connection.

I have many big, fast computers on 1 or 10 gigabit connections that I can use via SSH that are better for building docker images or testing Dockerfiles (I do all of my editing and giting and suchlike on my local workstation, because my signing keys and editor config are all here).

cat > ~/Library/bashrc.d/999.remote-docker.sh <<'EOF'
alias ber1.docker="/bin/rm -f $TMPDIR/docker.sock; ssh -fnNT -L $TMPDIR/docker.sock:/var/run/docker.sock root@ber1.example.com ; export DOCKER_HOST=unix://$TMPDIR/docker.sock"
EOF

The preceding uses ssh to forward a local unix socket (a type of special file) to a remote server (in this example, ber1.example.com), which your local docker command can then use to talk to the remote docker daemon (it locates the socket via the DOCKER_HOST environment variable; -f backgrounds ssh once the forward is up, so the export actually runs). You’ll want to change the ber1.docker part to whatever you want the command to be to enable your remote docker, and the root@ber1.example.com part to the username and hostname of the remote machine you wish to use. (It needs to be running ssh and docker already.)

Once you run one of those aliases (they have to be aliases instead of scripts because they need to modify the environment of your existing, running shell) you should be able to use all normal docker commands (e.g. docker ps, docker push, docker build -t username/image /path/to/dir) just as if you were on the docker host itself. This makes it pretty simple to do a docker build . in a directory in which you’ve been hacking, but leveraging all of the power of a big, fast machine in a datacenter. You’ve never seen apt update in your Dockerfile go so fast.

Security warning: anyone who can read and write from the local socket on your workstation (probably just you, but worth mentioning) has root on the remote server, as API access to a remote docker daemon is equivalent, from a security and practical standpoint, to root on the docker host itself.

Update! Better yet! HN user 11235813213455 writes to say that you can simply set DOCKER_HOST to an ssh url starting in docker 18.09!

export DOCKER_HOST=ssh://root@remotehost.example.com
docker ps

or, to persist:

echo 'export DOCKER_HOST=ssh://root@remotehost.example.com' > ~/Library/bashrc.d/999.remote-docker.sh

Hacks Repo

I have a git repository called hacks into which I commit any non-secret code, scripts, snippets, or supporting tooling that isn’t big or important or generic enough to warrant its own repo. This is a good way to get all the little junk you work on up onto a website without creating a billion repositories.

Back Up Your IMAP Data Locally From Cloud Services

You might use gmail and access it with a mail client via IMAP. Use offlineimap to periodically back it up to files in a local maildir-format directory, so that if your Google account should evaporate through no fault of your own, you don’t lose decades of email. (Sync it to other machines via syncthing to avoid losing data via disk crash or hardware theft, or put it somewhere that your local workstation backups will pick it up.)
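
For example, once offlineimap is configured in ~/.offlineimaprc, a crontab entry like this (the path and schedule are just examples) keeps the local copy current:

# crontab -e: sync once per hour, quietly, as a single one-shot run
0 * * * * /usr/local/bin/offlineimap -o -u quiet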

Back Up Your Local Workstation

I back up my local machine to a remote (encrypted disk) server via SSH using rsync via this script. It looks at the environment variable $BACKUPDEST to figure out where to do the backup.

For a remote ssh backup, do:

echo 'export BACKUPDEST="remoteuser@remotehost.example.com:/path/to/destination"' > ~/Library/bashrc.d/999.backup-destination.sh

For a local drive:

echo 'export BACKUPDEST="/Volumes/externaldisk/mybackup"' > ~/Library/bashrc.d/999.backup-destination.sh

Then use the above script. Copy it to your local machine and edit the backup exclusions as required for your use case.
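
If you’d rather roll your own than adapt his script, a minimal sketch of the same idea looks something like this (the exclusions are illustrative, not his actual list):

#!/usr/bin/env bash
set -euo pipefail

# refuse to run without a destination
: "${BACKUPDEST:?set BACKUPDEST to user@host:/path or a local path}"

# archive mode with progress; --delete removes remotely what was deleted locally
rsync -avP --delete \
    --exclude='.Trash/' \
    --exclude='Library/Caches/' \
    "$HOME/" "$BACKUPDEST/"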

If you trust other companies with your data and want something more user-friendly, check out Backblaze, as they’re cheap and excellent and offer unlimited storage.

(I also use the macOS built-in backup called Time Machine to back up to an external USB drive periodically, but I don’t trust it. syncthing is my first-line defense against data loss, my rsync backups are my second, and the Time Machine backups are just a safety net.)

Makefile in Home Directory

I have a Makefile in my home directory (really just a symlink to ~/dev/hacks/homedir.makefile/Makefile; it officially lives in my hacks repository) that I use to store common tasks related to my local machine (many of which are somewhat out of date, I note now on review).

The one I use most often, though, is make clean, which takes everything in ~/Desktop and moves it into ~/Documents/$YYYYMM (creating the month directory in the process if it doesn’t exist), and also empties trashes. This alone is worth the price of admission to me.

git.eeqj.de/sneak/hacks/homedir.makefile/Makefile

I should prune it of old/outdated commands and update it for my current/latest backup configuration. In my ideal world, make in my home directory would empty trashes, clean up the desktop, download/mirror all my IMAP email to local directories, then run a full backup to a remote host.
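
A sketch of what that clean target might look like (illustrative, not the actual Makefile linked above; the trash path assumes macOS):

YYYYMM := $(shell date +%Y%m)

clean:
	mkdir -p $(HOME)/Documents/$(YYYYMM)
	-mv $(HOME)/Desktop/* $(HOME)/Documents/$(YYYYMM)/
	-rm -rf $(HOME)/.Trash/*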

Sync Development Directory

I have a symlink, ~/dev, in my home directory, that points to a subdirectory of my synced folder, ~/Documents/sync, into which I check out any code I’m working on. I rarely edit code outside of ~/dev. This way, even if I don’t remember to commit and push, my current working copies are synced shortly after save to several other machines. I wouldn’t lose much if you threw any of my machines in the river at any time.
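
The setup is a one-time pair of commands:

mkdir -p ~/Documents/sync/dev
ln -s ~/Documents/sync/dev ~/dev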

Stupid SSH Tricks

Your ssh client config lives at ~/.ssh/config.

The user ssh client config file is amazing, and you should be using it extensively. ssh_config(5) has more info (run man 5 ssh_config).

Here’s the basic format:

Host specific.example.com
	SpecificHostnameParameter

Host *.example.com
	ExampleDotComParameter
	ExampleDotComParameter

Host *
	GlobalParameter
	GlobalParameter

In this way, you can specify new default settings for all ssh commands, and then override them on a specific wildcard/host basis.

e.g. to always ssh as root:
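
Host *
	User root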

Note that (I’m told) ssh will read each setting in the order it is found in the file, without later items being allowed to override previous ones, so specify them from most-specific to most-generic (putting the Host * at the end), allowing host- or domain-specific items to come before your defaults.

SSH: Move Your SSH Config File Into a Synced Folder

In the following example, ~/Documents/sync is a synced directory that replicates automatically across all my workstations using syncthing. (You should use syncthing.) You could also use Google Drive or Dropbox if you want to give third parties that much control over your machine, or knowledge of your hostnames/habits.

mkdir -p ~/Documents/sync/dotfiles
mv ~/.ssh/config ~/Documents/sync/dotfiles/ssh_config
ln -s ~/Documents/sync/dotfiles/ssh_config ~/.ssh/config

On the other machines, just:

rm ~/.ssh/config
ln -s ~/Documents/sync/dotfiles/ssh_config ~/.ssh/config

Now, settings changes for ssh automatically propagate to all workstations.

You could do the same for your known_hosts file to sync host key fingerprints between all of your machines, too, but I don’t bother, as I find TOFU sufficient.

SSH: Faster Crypto

Put the following in your Host * section:
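
(This cipher list is shown for illustration, preferring AES in counter mode; see the comments below for why pinning ciphers at all is contested.)

	Ciphers aes256-ctr,aes192-ctr,aes128-ctr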

It’s my understanding that the counter mode is more efficient on modern, multicore CPUs, as it is easier to parallelize.

SSH: Persistent Connections

Put the following in your Host * section:

Host *
	ControlPath ~/.ssh/%C.sock
	ControlMaster auto
	ControlPersist 10m

Make sure you use %C (a hash of the username+hostname) as the filename token instead of %h (hostname) or whatever else other tutorials on the internet told you. I ran into issues using the other formats, whereas %C uses just [a-f0-9] in the first part of the filename.

This will maintain a connection to each host you ssh into for 10 minutes after idle. Any future ssh connections while the first is open (or within that 10 minute window) will re-use the existing TCP connection, which speeds things up a lot.

Security notice: anyone who can write to these socket files (probably just you) has full access to the hosts to which they are connected.
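
You can inspect or tear down the shared connection with ssh -O:

ssh -O check somehost   # report whether a live master connection exists
ssh -O exit somehost    # ask the master connection to exit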

SSH: Rewrite Hostnames

Have some machines that aren’t in DNS, or have stupid hostnames that you can’t remember? Using IPs is a terrible smell that you should always avoid. Rewrite them by overriding their connection hostname:

Host workbox.example.com
	HostName 2.3.4.5

Host otherbox
	Port 11022
	User ec2_user
	HostName real-hostname-is-long-and-dumb.clients.hostingprovider.su

Then, just ssh otherbox. Sure beats ssh -p 11022 ec2_user@real-hostname-is-long-and-dumb.clients.hostingprovider.su!

In this way, your ssh config file functions as a sort of local dns database.

SSH: ProxyCommand

You can use the ProxyCommand directive to tell ssh how to get i/o to a remote ssh service, skipping the whole direct TCP connection process entirely. You can use this for connecting transparently via a bastion host, e.g.:

Host *.internal.corplan
	ProxyCommand ssh user@bastionhost.example.com nc %h %p

Using the preceding will result in ssh box1.internal.corplan sshing as user to bastionhost.example.com and running netcat there as nc box1.internal.corplan 22 (%h and %p are replaced with the destination host and port of the “main” ssh, i.e. the ones you typed or implied on the command line, in this case box1 and 22).

If you don’t have a nice organized corporate naming scheme, or even DNS at all, you can hardcode the values:

# 2.3.4.5 is the bastion host
Host box2.internal.corplan
	ProxyCommand ssh myuser@2.3.4.5 nc 10.0.1.102 22

Host box3.internal.corplan
	User appuser
	ProxyCommand ssh myuser@2.3.4.5 nc 10.0.1.103 22

Alternately, combine them:

Host bastion.corpext
	HostName 2.3.4.5
	User myuser

Host box2.internal.corplan
	ProxyCommand ssh bastion.corpext nc 10.0.1.102 22

Host box3.internal.corplan
	ProxyCommand ssh bastion.corpext nc 10.0.1.103 22

Finally, I used nc (netcat) to illustrate the example, but it turns out that the ssh command has the functionality built in (as -W), removing the need to have netcat installed (-T tells it not to allocate a pty):

Host box2.internal.corplan
	ProxyCommand ssh -T -W 10.0.1.102:22 user@2.3.4.5

The beauty of setting up key-based SSH and configuring your hosts in your ssh client config file is that then commands such as:

rsync -avP ./localdirectory/ otherhost:/path/to/dest/

…will “just work”, even if the machine is behind a bastion host, or needs a special SSH port, or a different username, or even if it’s accessed via tor. You no longer need to think about the specifics of each ssh host (other than the hostname); the rest just lives in your config file.

This also allows you to use the ssh/scp support in your local editor (vim does this, for example) to edit files on remote machines (in a local editor without keyboard lag) that might be a pain in the ass to ssh into due to being behind firewalls or bastion hosts, or on weird ports. Put the specifics in the config file, then it’s as simple as vim scp://internalbox1.example.com//etc/samba/smb.conf (two slashes between hostname and absolute path for vim’s scp support, mind you).

SSH: Easy Access With Tor

I like to install tor on boxes I administrate, and set up a hidden service running on them for ssh, because then I can ssh into them (albeit slowly) even if they have all inbound ports firewalled, or are behind NAT, or whatever—no port forwarding required.

Install tor on the server, add the following two lines to /etc/tor/torrc, restart tor, and now you have a hidden service address for that system:

apt update && apt install -y tor
cat >> /etc/tor/torrc <<EOF
HiddenServiceDir /var/lib/tor/my-ssh/
HiddenServicePort 22 127.0.0.1:22
EOF
service tor restart
cat /var/lib/tor/my-ssh/hostname

If you don’t want the ssh service to be reachable even from the lan/wan (only via the hidden service), add a ListenAddress 127.0.0.1 to /etc/ssh/sshd_config and bounce sshd.
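
That change is quick (the service command assumes a Debian-style system, like the tor setup above):

echo 'ListenAddress 127.0.0.1' >> /etc/ssh/sshd_config
service ssh restart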

For the following to work, you have to be running tor on your local machine too (which provides a SOCKS5 proxy at 127.0.0.1:9050).

Two parts are required:

Host *.tor
	ProxyCommand nc -x 127.0.0.1:9050 %h %p

Then, for each host:

Host zeus.tor
	User myuser
	HostName ldlktrhkwerjhtkexample.onion

Then, I can just ssh zeus.tor. It matches the ProxyCommand in the *.tor block, which uses netcat to talk to the local SOCKS proxy provided by the tor daemon on localhost to connect to the .onion; the .onion hostname and the username for that specific box are picked up from the full host string match on zeus.tor.

I actually use my ssh config file as my master record of the onion hostnames of my machines. (This is one reason why syncing with syncthing is vastly preferred to using a file syncing service that gives third parties access to your files. I would prefer that nobody know what hidden services I am interested in or are associated with me, for privacy’s sake.)

SSH: Forward Public Port To Local Server

Ever want to access a machine behind several NATs/firewalls from a publicly accessible port? You can use this for access to SSH, or any other service running on the ssh client machine, like a development webserver. First, set up unattended key authentication from the target machine (behind the firewall) to the public machine that is reachable: generate an ssh key without a passphrase on the target machine, create an unprivileged user on the public machine, and add that public key to ~/.ssh/authorized_keys for the unprivileged user. The public machine also needs GatewayPorts yes in its /etc/ssh/sshd_config (which is not the default), so it requires a little configuration change to get working.
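
A rough sketch of that setup (usernames and the key filename are examples; the adduser flags assume Debian):

# on the target machine, behind the firewall:
ssh-keygen -t ed25519 -N "" -f ~/.ssh/tunnel_key

# on the public machine:
adduser --disabled-password tunnel
# append tunnel_key.pub to ~tunnel/.ssh/authorized_keys, then:
echo 'GatewayPorts yes' >> /etc/ssh/sshd_config
service ssh restart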

Then, set the following command to run continuously (via runit or systemd):

ssh -T -R 0.0.0.0:11022:127.0.0.1:22 user@publichost

In the above example, port 11022 on the public host (externally reachable) connects through to the target machine’s ssh port (22) for as long as the ssh command is running. This is a quick hack to make a service behind firewalls, or on your local workstation, publicly accessible, along the lines of what ngrok does, but re-using a server you already have.
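
Under systemd, a minimal unit for this might look like the following (the unit name, user, and host are placeholders; -N is added so no remote shell is started):

# /etc/systemd/system/reverse-tunnel.service
[Unit]
Description=Persistent reverse ssh tunnel to publichost
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/ssh -NT -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 -R 0.0.0.0:11022:127.0.0.1:22 user@publichost
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now reverse-tunnel.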

To expose a local development webserver:

ssh -T -R 0.0.0.0:80:127.0.0.1:8080 root@publichost

(You need to use root to bind to ports under 1024, such as 80.)

Nicholas at hackernotes.io has more information on the technique, including a way to restrict this type of remote port binding to a specific user.

See Also

Thanks to HN user roryrjb, Peter Fischer, and James Abbatiello for all submitting bug reports for this post, all of which have been incorporated in small edits. This just once again illustrates that the best way to get a thorough list of errors and required corrections is to speak authoritatively about something in public. :D

Thanks also to @darkcube, @salcedo, and @someara for prerelease proofreading.

Feedback?

Think I’m right? Think I’m totally wrong? Find a bug in a snippet? Complaints, suggestions, and praise to sneak@sneak.berlin.



Comments

  • By raimue 2019-10-17 12:54

    Regarding the point of "SSH: Faster Crypto", you should not enforce only one single specific cipher for ssh. The reason also seems wrong, modern hardware should be capable of achieving better performance with an AEAD cipher (such as AES-GCM or ChaCha20-Poly1305) instead of AES-CTR, as the latter also requires an additional HMAC.

    If there really is a slow (or insecure) cipher you do not want to use, remove it by prepending a minus sign, for example `Ciphers -3des-cbc`, which keeps all other default ciphers. Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.

    • By xoa 2019-10-17 19:07

      >Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.

      Is that actually a real concern at this point vs the risk that comes from not white listing a few known reliable ones? It seems like old-style security, favoring modularity and "what if we need to change this someday soon?", but in practice it's turned out to be a lot less valuable than expected and raises significant risks of accidentally using something bad. We seem to be past the point where new ciphers being "better" is actually an event to be expected with any real frequency, prime/elliptic curve seems pretty mature now. Post-quantum could be a new frontier at some point, but may well require significant other changes as well.

      WireGuard for example decided to just flat out say that any cipher changes will be tightly coupled with a full on version change. If you're using it you know exactly what you're getting and that's it.

      I don't disagree that white listing means care with what is chosen, and picking AEAD-based seems like a better idea anyway (WG is Curve25519/ChaCha20-Poly1305/SipHash/BLAKE2s). Plus there is server compatibility to consider in some cases. But I'm not sure the logic of "you might miss out on better ciphers as they are added" is convincing either, vs setting yourself an alarm to recheck your SSH setup every year or two. Shouldn't any cipher additions/deletions arguably be something you actively consider, rather than have automagically added?

      • By paulddraper 2019-10-17 23:03

        > what if we need to change this someday soon?

        Git was created in 2005 and its hash algorithm is already outdated.

        Additionally, software and hardware support continue to develop for better performance.

        • By CodesInChaos 2019-10-18 16:53

          SHA-1 was already broken (Feb 2005) when git was first published (April 2005). But Linus decided that git doesn't need a collision resistant hash function. https://marc.info/?l=git&m=115678778717621

          • By paulddraper 2019-10-18 18:01

            SHA-1 was not broken until 2017.

            http://shattered.io/

            If you know an earlier instance, go ahead and take the crown from the shattered folks.

            ---

            The choice to use SHA-1 was a trade-off of security, size, performance. If Linus invented git today, I imagine the choice would have been different, because those parameters are now different.

            • By CodesInChaos 2019-10-19 7:32

              In cryptography broken means "known attack significantly faster that brute-force", which was published in 2005. And cryptographers were advocating for deprecating it several years before that, because the security margin was clearly insufficient. https://www.schneier.com/blog/archives/2005/02/sha1_broken.h...

              The time between a theoretical attack and practical demonstration of an attack should be considered a grace period we can use to migrate to a secure primitive. Choosing SHA-1 for an application which relies on collision resistance after the 2005 papers is plain incompetence.

              Git chose SHA-1 because Linus did not consider collisions a problem. The downsides of SHA-256 were pretty small even then (32 instead of 20 bytes, and somewhat slower performance which is still faster than most IO).

      • By nine_k 2019-10-18 0:21

        You never know when a particular cipher will be cracked, or its implementation found to have a serious flaw.

        At moments like that, you want an easy and fast way to disable such a cipher, but stay interoperable otherwise.

    • By bhauer 2019-10-17 16:13

      > Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.

      That sounds like good advice; thank you. Out of curiosity, as someone who is fairly noobish on SSH, are "better ciphers" typically automatically preferred by SSH clients and servers as they are introduced? In other words, do the SSH implementations maintain a rank ordering that prefers "better" ciphers? That would be my expectation, but it seems I am often surprised by the bad defaults when dealing with security.

      • By TrueDuality 2019-10-17 17:20

        Generally yes (there are a lot of SSH implementations out there), but that isn't the only thing you want to protect against:

        1. If there is a critically broken cipher, an attacker that can perform a MiTM attack can claim to each end that only the broken cipher is supported, forcing an association using it and thus breaking your crypto transparently.

        This type of attack would be high effort and targeted. Most threat models don't really need to address this issue, but disabling ciphers is so easy you might as well spend a couple of keystrokes doing it.

        2. If the cipher implementation is broken (think OpenSSL's heartbleed) then leaving the cipher available opens you up to being directly attacked by botnets.

        This type of attack has a high initial cost for the attacker (developing the exploit) but can be sprayed across the entire internet. This is the type of attack that would affect most people and should be protected against by patching and disabling known bad ciphers.

      • By aasasd 2019-10-17 17:27

        I wonder if this issue is prone to cases of ‘a sysadmin writing a verbose config,’ like with TLS servers where ciphers are often put in a whitelist in my experience.

    • By nominated1 2019-10-17 17:34

      > Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.

      You’ve identified issues with whitelisting but blacklisting isn’t perfect either. For example, many don’t trust NIST and may want to prevent the use of any of their future curves. Blacklisting fails here.

      When updating to a newer version of SSH I think it’s good practice to ‘man ssh_config’ and at least look at KexAlgorithms, HostKeyAlgorithms and Ciphers.

  • By asaph 2019-10-17 14:58

    > Don’t try to install things with brew if brew is not installed:

       if which brew 2>&1 /dev/null ; then
            brew install jq
       fi
    
    This just hides a useful error message (brew not installed). I would rather just see that error message (either interactively or in a log) and have the script fail. Hiding the error message just leads to an eventual failure down the road when jq is invoked.

    • By reacweb 2019-10-17 15:24

      In bash, there is the build-in command named "command". I have used it for the same purpose, like in:

          command -v brew >/dev/null && brew install jq

      • By temo4ka 2019-10-17 18:30

        If you don't care for maximum POSIX compatibility, i.e. your script is bash-specific, it's better to use "hash", which is going to ignore aliases (but not functions) and also has the benefit of caching the command for further use.

          hash foo &>/dev/null || { echo "foo command not found blabla..." >&2; exit 1; } 
        
        
        more info: https://stackoverflow.com/questions/592620/how-to-check-if-a...

      • By raldi 2019-10-17 19:03

        Wow. It's impossible to google, and on MacOS, `man command` is useless.

        • By s_gourichon 2019-10-17 19:10

          $ help command

          command: command [-pVv] command [arg ...]
              Execute a simple command or display information about commands.

              Runs COMMAND with ARGS suppressing  shell function lookup, or display
              information about the specified COMMANDs.  Can be used to invoke commands
              on disk when a function with the same name exists.
              
              Options:
                -p    use a default value for PATH that is guaranteed to find all of
                      the standard utilities
                -v    print a description of COMMAND similar to the `type' builtin
                -V    print a more verbose description of each COMMAND
              
              Exit Status:
              Returns exit status of COMMAND, or failure if COMMAND is not found.

          • By m463 2019-10-18 2:15

            Thank you.

            I never knew about help to describe a builtin shell command. Normally, searching for something like "command" in the bash man page would be very tedious.

            also: help echo

            • By dTal 2019-10-19 11:50

              How little we expect from our computers. "help <command>" should be the first thing you expect to work.

            • By imihai1988 2019-10-18 7:54

              I do think 'man command' should bring up the specific man page for this, so there is no need to search the bash man page. This works with most builtin shell commands, e.g. 'man ls' is a thing

              • By stragies 2019-10-18 8:57

                On (at least) Debian systems (and probably many more), `ls` is not a built-in; an external binary (`/usr/bin/ls`) is used.

                `which ls` and `help ls` will show you, if it is similar on your system.

                • By imihai1988 2019-10-18 16:19

                  It doesn't need to be a built-in to have a man page (many first party and third party libraries come with their own man pages).

              • By m463 2019-10-18 19:46

                "man command" on linux says "no manual entry for command"

                "man echo" on linux describes /usr/bin/echo "help echo" describes the builtin.

                on the othe hand, "man command" on mac os x gives a huge manpage of builtins (still hard to search for the common word "command")

        • By gravitas 2019-10-17 21:40

          `command` is a bash builtin, not an external program. The information is in the bash man page (which is hard to find in there because the word "command" is in the man page 7 bajillion times. Look in the major section "SHELL BUILTIN COMMANDS" where you find `cd` and others like it).

        • By jplayer01 2019-10-18 8:33

          You can determine if something is a bash built-in by using 'which command'. If it's built-in, man won't work.

      • By Fnoord 2019-10-18 9:19

        In config.fish I use type for this purpose, together with the exit code

          test (type brew 2>/dev/null); and brew # blahblah

        • By lloeki 2019-10-19 9:01

          type -p also works in bash/zsh

          • By Fnoord 2019-10-19 9:44

            Fish as well, but the redirection of both stdout and stderr is different:

              type -p brew >/dev/null ^&1; and brew
            
            in Bash it would be:

              type -p brew &>/dev/null && brew
            
            Using test avoids caring about stdout/stderr.

    • By aasasd 2019-10-17 15:53

      You don't need to check for `brew` in each task if they're run automatically. Just have one task that checks for `brew`.

      With a proper dependency graph, the tasks installing things would depend on the task `brew-installed`.

      This, of course, is leaving alone the point that installing stuff on startup is weird, doubly so with brew which is pretty slow.

      • By aasasd 2019-10-18 5:29

        Correction, to clarify: you need to check for `brew` in each task but not warn. One master warning is enough.

    • By jolmg 2019-10-17 15:37

      It seems you miswrote, or the author corrected himself. Right now it's:

        if which brew >/dev/null 2>&1 ; then
            brew install jq
        fi
      
      Which would hide the error message. The code you posted outputs:

        brew not found
        /dev/null not found

      • By lmm 2019-10-17 16:12

        It's impractical to determine whether Hacker News's undocumented formatting language is going to eat any given angle bracket ahead of time. I suspect OP wrote something correct and the site has mangled it.

        • By asaph 2019-10-18 1:56

          It got messed up when I was trying to format it, unfortunately. But my point still stands.

    • By majewsky 2019-10-17 15:15

      I think this snippet needs some context. In all contexts that I can imagine, the first line should actually be

        if which jq &>/dev/null; then
          brew install jq
        fi

      • By playpause 2019-10-17 15:23

        That seems more reasonable, but it's not what the author wrote. He precedes the snippet with "Don’t try to install things with brew if brew is not installed", so his intention does seem to be to swallow errors silently, which is definitely weird.

        • By pletnes 2019-10-17 18:41

          He might want his .bashrc to work on both macos and linux.

          • By 00N8 2019-10-18 2:31

            IME .bashrc already doesn't work the same on OS X, though, so that's still a weird reason.

            • By zeroimpl 2019-10-18 2:50

              Likely due to using the older version of bash preinstalled on Mac. Install a new one and it should be virtually identical.

  • By roryrjb 2019-10-17 12:56

    > Before we begin, first note that bashrc refers to something that runs in each and every new shell, and profile refers to something that runs only in interactive shells (used by a user at a keyboard, not just a shell script, for example). They aren’t the same and you don’t want them to be the same.

    I'm not an expert on bash exactly, though I am a heavy shell user (POSIX shell for scripting), but this part doesn't sound right. When is .bashrc ever executed when bash isn't interactive? And as far as I can tell, when I open new shells .profile isn't read. I am using Linux and tmux, and the reason I mention tmux is that it opens bash as a login shell and therefore .bash_profile is also loaded. Is this a mac OS thing, or the version of bash mac OS comes with, which I believe is really old due to some license issues?

    • By skywhopper 2019-10-17 14:46

      Which startup files are read by bash and other shells in which state is very inconsistent, even across distributions of Linux. I've collapsed all of mine into .bashrc and simply source that file from the other possibilities. And on the rare occasion that I care about interactive vs not, I can make that distinction explicitly in the code.

    • By russjones 2019-10-17 18:07

      I ran an eBPF program called opensnoop [1] to capture what files were opened during login to a system and then re-launching bash. Looks like both are read during initial login but only .bashrc for non-login shells. Output is below.

        24435  bash                3   0 /etc/profile
        24435  bash                3   0 /etc/profile.d/
        24435  bash                3   0 /etc/profile.d/256term.sh
        24435  bash                3   0 /etc/profile.d/colorgrep.sh
        24435  bash                3   0 /etc/profile.d/colorls.sh
        24435  bash                3   0 /etc/profile.d/lang.sh
        24435  bash                3   0 /etc/profile.d/less.sh
        24435  bash                3   0 /etc/profile.d/which2.sh
        24435  bash                3   0 /etc/profile.d/sh.local
        24435  bash                3   0 /home/centos/.bash_profile
        24435  bash                3   0 /home/centos/.bashrc
        24435  bash                3   0 /etc/bashrc
      
        24736  bash                3   0 /home/centos/.bashrc
        24736  bash                3   0 /etc/bashrc
        24736  bash                3   0 /etc/profile.d/
        24736  bash                3   0 /etc/profile.d/256term.sh
        24736  bash                3   0 /etc/profile.d/colorgrep.sh
        24736  bash                3   0 /etc/profile.d/colorls.sh
        24736  bash                3   0 /etc/profile.d/lang.sh
        24736  bash                3   0 /etc/profile.d/less.sh
        24736  bash                3   0 /etc/profile.d/which2.sh
      
      
      [1] http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-li...

      • By remram 2019-10-18 17:17

        Be careful that some of those files explicitly include others. For example my (default) ~/.profile includes ~/.bashrc, my ~/.bash_profile includes ~/.profile, /etc/profile includes /etc/bash.bashrc...

        So your capture here doesn't show only the files that bash itself decided to load. You also won't see the fallback files (e.g. bash will open .profile if .bash_profile doesn't exist).

    • By tremon 2019-10-17 13:22

      as far as I can tell when I open new shells .profile isn't read

      True, iff you have a .bash_profile. Bash only reads the first of ~/.bash_profile, ~/.bash_login, ~/.profile. It will ignore the rest of the list once it's found an existing file.

      • By jolmg 2019-10-17 15:43

        Still, they're not read with every new interactive shell. They're read only with login shells. From the manual:

        > When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login,

        > When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc

        So, this is wrong:

        > bashrc refers to something that runs in each and every new shell

        Because .bashrc doesn't run when you execute a shell script, and doesn't run when you use `bash -c`.

        And this is wrong:

        > profile refers to something that runs only in interactive shells

        Because it does run in non-interactive shells when they're login shells, and because it implies that it runs in every interactive shell, which it doesn't. It only runs in login shells.

        • By lloeki 2019-10-19 9:05

          Insanely enough, this is set up wrong in Debian and derivatives, where bash_profile sources bashrc as a default from /etc/skel, leading to damning "not a tty" messages because at some point it calls some tty requiring command unconditionally.

      • By ashtonbaker 2019-10-17 13:50

        This explains so many of my problems. Thank you.

    • By s_gourichon 2019-10-17 19:19

      There's a graph (generated by graphviz from text description) that shows the flow and files involved for bash, sh and zsh. Yes, it's insane.

      https://blog.flowblok.id.au/2013-02/shell-startup-scripts.ht...

    • By rovr138 2019-10-17 13:05

      I thought macOS always loaded the shell as a login shell by default while most Linux and UNIX load a non-login shell by default.

      On a Mac you usually end up adding

          if [ -f ~/.bashrc ]; then
            . ~/.bashrc;
          fi 
      
      To your .profile or .bash_profile.

      Just read that part and now I can’t help but question the rest of it before even reading it

      • By _mdpn 2019-10-17 13:24

        I just moved to Catalina, and the fact that zsh has a clear and reasonable standard for startup files made up for the fact that I had to move my shell init to its files.

      • By Fnoord 2019-10-18 9:28

        If you do, it will even load it for non-interactive shells.

          if [ -n "$PS1" ] ; then
            [ -r ~/.bashrc     ] && . ~/.bashrc
            [ -r ~/.bash_login ] && . ~/.bash_login
          fi

        • By rovr138 2019-10-19 1:08

          Yep. It’s a mess frankly.

    • By Aloha 2019-10-17 13:41

      It sounds right to me. When running through cron, for example, I need to load env vars as part of my script startup, otherwise even PATH is missing.

    • By techslave 2019-10-17 14:27

      > I'm not an expert on bash exactly though I am a heavy shell user (POSIX shell for scripting) but this part doesn't sound right. When is .bashrc ever executed when bash isn't interactive?

      Never.

      TFA has got it wrong.

    • By robohoe 2019-10-17 13:01

      .bashrc would get executed when you run something in cron. Cron runs non-interactive shell sessions.

      • By roryrjb 2019-10-17 13:11

        I tested this and this is not the case. My .bashrc does a lot of things such as custom completions that are expensive that I would not want run by anything else other than an interactive shell. It also has return at the end so if anything I don't know about adds stuff to it, it won't get executed. This would break other scripts loading .bashrc as well. So as far as I can see this is false.

        • By tremon 2019-10-17 13:29

          From the manual:

          When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist. This may be inhibited by using the --norc option.

          So interactive, non-login shells only.

          But also:

          When invoked as an interactive shell with the name sh , bash looks for the variable ENV, expands its value if it is defined, and uses the expanded value as the name of a file to read and execute. [A] shell invoked as sh does not attempt to read and execute commands from any other startup files

          So only when invoked as bash, not as sh.

          • By roryrjb 2019-10-17 13:31

            Thank you, very helpful. That explains why I have to have my .bash_profile source .bashrc for use with tmux, as it executes bash as a login shell. I knew I had to have this but wasn't 100% on why until now. Note to self, RTFM!

    • By hn_new 2019-10-17 23:16

      scp executes .bashrc if I recall correctly. Any echo in .bashrc broke scp for me in the past.
