Far too much effort to remove versus just leaving it there. It's not going to harm anyone.
Correctly remove the /dev/loop0 squashfs created by snap
Hmmm...
After a reboot, the /dev/loop device is created again, and the /snap directory reappears too, even after removing snapd and deleting the /snap directory with rm -rf.
Weird.
I'll try removing squashfs-tools and manually deleting any files related to snapd; maybe that will help...
Finally, I did it.
The article on the Russian ArchWiki describes removing snapd in great detail.
So, for anyone who wants to delete snapd from their system completely, here is a short guide:
- Check the list of installed snaps:
sudo snap list --all
- Remove every installed snap:
sudo snap remove snapname
(for the core snap, also pass the --revision revision_number option)
- Remove snapd itself:
sudo eopkg rmf snapd
- List all /dev/loop devices currently mounted by snap:
sudo mount | grep snap | awk '{print $3}'
- Unmount these devices:
sudo umount device_name
- Remove the snap directories:
sudo rm -rf /var/lib/snap && sudo rm -rf /snap
- Remove all the systemd units that mount snap packages from /var/lib/snapd/snaps onto /snap at boot:
sudo find /etc/systemd/system -name "snap-*.mount" -delete
sudo find /etc/systemd/system -name "snap.*.service" -delete
sudo find /etc/systemd/system/multi-user.target.wants -name "snap-*.mount" -delete
sudo find /etc/systemd/system/multi-user.target.wants -name "snap.*.service" -delete
- Reboot, and that's all. You're done.
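The steps above can be collapsed into one sketch. This is only an illustration based on the commands in this thread (Solus/eopkg, the paths as written above), not an official script; it only prints what it would do unless you pass --force, so review the output before running it for real.

```shell
#!/bin/sh
# purge-snapd.sh -- sketch of the manual removal steps from this thread.
# DRY RUN by default: pass --force to actually execute the commands.

run() {
    if [ "$FORCE" = "yes" ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

FORCE=no
[ "$1" = "--force" ] && FORCE=yes

# 1. Remove every installed snap (core may also need --revision).
if command -v snap >/dev/null 2>&1; then
    snap list --all 2>/dev/null | awk 'NR > 1 {print $1}' | sort -u |
    while read -r name; do
        run sudo snap remove "$name"
    done
fi

# 2. Remove the snapd package itself (Solus).
run sudo eopkg rmf snapd

# 3. Unmount every snap squashfs still mounted.
mount | grep snap | awk '{print $3}' |
while read -r mountpoint; do
    run sudo umount "$mountpoint"
done

# 4. Remove leftover state -- without this, core is restored on reboot.
run sudo rm -rf /var/lib/snap
run sudo rm -rf /snap

# 5. Delete the systemd units that re-create the mounts at boot.
run sudo find /etc/systemd/system -name "snap-*.mount" -delete
run sudo find /etc/systemd/system -name "snap.*.service" -delete
run sudo find /etc/systemd/system/multi-user.target.wants -name "snap-*.mount" -delete
run sudo find /etc/systemd/system/multi-user.target.wants -name "snap.*.service" -delete

echo "dry run finished -- re-run with --force to apply, then reboot"
```

The dry-run wrapper is my own addition, so you can see the full command list on a system that still has snaps installed before committing to it.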
I don't know what system configuration caused you to have this much trouble getting rid of snap completely (as to why I bothered: I've always had trouble with permissions on snaps, while flatpaks work pretty much painlessly). On a fresh install, I simply used
sudo eopkg rmf snapd
and it was enough to remove snap permanently from my system (no reappearing /snap directory or /dev/loop device). Maybe once some snaps (e.g. core) are downloaded, snap itself becomes entangled with the system? I'm puzzled here...
nathanpainchaud The reappearing /dev/loop device and /snap directory after reboot on my system were caused by my forgetting to remove /var/lib/snap after removing /snap. After a reboot, snap's systemd services simply restore the core snap from /var/lib/snap onto /snap.
But... I'm really puzzled here too as to why sudo eopkg rmf snapd doesn't remove snapd properly (even with the --purge option) when it does on a fresh install. I can only assume that on a fresh install no snap is loaded (not even core), but once you install some snap and then delete it with sudo snap remove, the core snap still exists (and if you try to delete it, snapd creates a new one). You can only delete it manually after stopping snapd and removing snapd via eopkg.
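As a sketch of that "stop snapd first, then delete core manually" order (the commands are the ones used in this thread; the apply wrapper is a hypothetical dry-run guard I added so nothing runs unless you set APPLY=yes):

```shell
#!/bin/sh
# Dry-run guard (my own helper, not part of snapd or eopkg):
# executes the command only when APPLY=yes, otherwise prints it.
apply() {
    if [ "${APPLY:-no}" = yes ]; then
        "$@"
    else
        echo "skip: $*"
    fi
}

# Order matters: stop snapd before touching core, or it restores the snap.
apply sudo systemctl stop snapd.service snapd.socket
# Remove the snapd package itself (Solus).
apply sudo eopkg rmf snapd
# Only now delete the restored core snap's state and mount point.
apply sudo rm -rf /var/lib/snap /snap
```

Run it once without APPLY set to see the sequence, then with APPLY=yes to execute it.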
Funnily enough, I found a thread about the same problem on the SolusProject subreddit.
User Linuxllc suggests this script: https://github.com/zyga/devtools/blob/master/reset-state , which does the same things I did manually.
nathanpainchaud Maybe once some snaps (e.g. core) are downloaded, snap itself becomes entangled with the system?
Hmm, I don't think so, because after removing snapd via eopkg, no snap-related daemons exist on the system.
Dentraq nathanpainchaud Maybe once some snaps (e.g. core) are downloaded, snap itself becomes entangled with the system?
Hmm, I don't think so, because after removing snapd via eopkg, no snap-related daemons exist on the system.
I didn't express myself quite as extensively and eloquently as you did earlier in your reply, but I meant essentially the same thing. It seems that as soon as a snap is installed (which requires the core snap to be installed), the whole thing becomes much harder to remove properly, because core keeps being restored.
Dentraq Thanks so much mate.
It was also bugging me how many mounted loops had tentacled their way in.
Followed your guide which worked perfectly.
Here is what my lsblk now looks like:
peter@tadhg ~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4.6T 0 disk
└─sda1 8:1 0 4.6T 0 part /home
sr0 11:0 1 1024M 0 rom
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part
├─nvme0n1p2 259:2 0 16G 0 part [SWAP]
└─nvme0n1p3 259:3 0 914.5G 0 part /