Correctly remove the /dev/loop0 squashfs created by snap

I removed snapd with sudo eopkg rmf --purge snapd just because I don't use it and don't want to use it at the moment.
But after removing snapd and rebooting, the /dev/loop0 squashfs device (89.1M in size, mounted at /snap/core/8039) still exists and is mounted.
How can I safely unmount and remove it?
I tried installing snapd again, and after running sudo snap remove core --revision 8039
I checked the list of installed snaps:
$ sudo snap list --all
Name  Version  Rev   Tracking  Publisher   Notes
core  16-2.45  9289  stable    canonical✓  core
Wow. Amazing. Wonderful.
I tried to remove this core thing with $ snap remove core --revision 9289 and got:
error: cannot remove "core": cannot remove active revision 9289 of snap "core"
I think sudo could delete the files, but snapd would just create a new core squashfs device again.
So I just removed snapd with sudo eopkg rmf --purge snapd,
unmounted the /dev/loop device with sudo umount /dev/loop1,
and deleted /snap with sudo rm -rf /snap.
Problem seems solved.
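For anyone checking their own system, something like this should verify the loop device is really gone (the device name /dev/loop1 is just what it was here; use whatever losetup shows on your machine):

# Confirm nothing is mounted from a snap image anymore
mount | grep snap || echo "no snap mounts left"
# List remaining loop devices; if /dev/loop1 still shows up, detach it manually
sudo losetup -a
sudo losetup -d /dev/loop1   # only needed if it didn't auto-detach after umount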
P.S.: Does the eopkg --purge option really work? It seems it doesn't do anything.
From man eopkg: "Remove files tagged as configuration files too. This primarily applies to any files in /etc/."
I assumed that option could handle this kind of issue (e.g. for snapd), but it doesn't.
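A quick way to see what --purge actually left behind (these are the usual snapd locations; paths that don't exist simply produce no output):

ls -d /snap /var/lib/snapd 2>/dev/null        # snap mount dir and state dir
find /etc/systemd/system -name 'snap*'        # generated mount/service units
sudo losetup -a | grep snap                   # loop devices still backed by snap images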
I was going to suggest finding snap's rundeps yesterday, but had a feeling you would..
There are eopkg dc and eopkg rmo if you want to be sure about the remnants (or any other remnants).
If you're not seeing that squashfs loop anymore, then I'd have to call it a success.
As to --purge's specific effectiveness, I can't answer.
Oh, I had the same problem with loop devices, but I don't remember how I managed to remove them. It was a long process...
brent Yeah, I know the dc and rmo options exist, but...
dc just removes cached download and package files still held by eopkg.
rmo just removes packages that were installed automatically and no longer have any dependency relationship with the packages installed on the system.
Neither of these two options cleans up config files or anything like that (e.g. for snapd, removing an orphaned loop device). --purge should, but its behavior isn't that clear.
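For reference, both are one-liners, and neither touches config files:

sudo eopkg dc    # delete-cache: clears cached package and download files
sudo eopkg rmo   # remove-orphans: removes automatically installed packages
                 # that nothing installed still depends on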
Solarmass I assume it requires manual intervention.
Far too much effort to remove versus just leaving it there. It's not going to harm anyone.
Hmmm...
After a reboot, the /dev/loop device is created again, and the /snap directory is created again too.
Even after removing snapd and the /snap directory with rm -rf.
Weird.
I'll try removing squashfs-tools and manually deleting any files related to snapd; maybe that will help...
Finally, I did it.
The Russian ArchWiki article describes snapd removal in good detail.
So, for anyone who wants to delete snapd from their system completely, here's a walkthrough (a combined sketch of a script follows the list):
- Check the list of installed snaps:
sudo snap list --all
- Remove all installed snaps:
sudo snap remove snapname
(for the core snap, also pass the --revision revision_number option)
- Remove snapd itself:
sudo eopkg rmf snapd
- List everything snap still has mounted (the third field of mount's output is the mountpoint):
sudo mount | grep snap | awk '{print $3}'
- Unmount each of them:
sudo umount mountpoint
- Remove the snap directories:
sudo rm -rf /var/lib/snapd && sudo rm -rf /snap
- Remove all the unit files used to mount snap packages from /var/lib/snapd/snaps onto /snap at boot:
sudo find /etc/systemd/system -name "snap-*.mount" -delete
sudo find /etc/systemd/system -name "snap.*.service" -delete
sudo find /etc/systemd/system/multi-user.target.wants -name "snap-*.mount" -delete
sudo find /etc/systemd/system/multi-user.target.wants -name "snap.*.service" -delete
- Reboot, and that's all. You're perfect.
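And to save the next person some typing, here are the same steps as a single sketch of a script. It assumes the paths above (/snap and /var/lib/snapd) and that you've already removed your snaps with snap remove; run at your own risk:

#!/bin/sh
# Sketch of the manual snapd cleanup steps above; not a polished tool.

# Remove the snapd package itself
sudo eopkg rmf snapd

# Unmount everything snap still has mounted
for mp in $(mount | grep snap | awk '{print $3}'); do
    sudo umount "$mp"
done

# Remove snap's directories
sudo rm -rf /var/lib/snapd /snap

# Remove the generated systemd mount/service units
sudo find /etc/systemd/system -name "snap-*.mount" -delete
sudo find /etc/systemd/system -name "snap.*.service" -delete
sudo find /etc/systemd/system/multi-user.target.wants -name "snap-*.mount" -delete
sudo find /etc/systemd/system/multi-user.target.wants -name "snap.*.service" -delete

echo "Done. Reboot and check that /snap and the loop devices stay gone."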
I don't know what system configuration caused you this much trouble getting rid of snap completely (as to why: I've always had trouble with permissions on snaps, but flatpaks work pretty much painlessly). On a fresh install, I simply used sudo eopkg rmf snapd and it was enough to remove snap permanently from my system (no reappearing /snap directory or /dev/loop device). Maybe once some snaps (e.g. core) are downloaded, snap itself becomes entangled with the system? I'm puzzled here...
nathanpainchaud The problem on my system with the reappearing /dev/loop device and /snap directory after reboot was that I forgot to remove /var/lib/snapd after removing /snap. After a reboot, snap's systemd services simply restored the core snap from /var/lib/snapd onto /snap.
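If you want to see that mechanism on your own system, the generated units are easy to spot (paths as they were on mine):

# Mount units snapd generated, one per snap revision
ls /etc/systemd/system/snap-*.mount 2>/dev/null
# What systemd currently has mounted for snap
systemctl list-units --type=mount | grep snap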
But... I'm really puzzled here too as to why sudo eopkg rmf snapd doesn't remove snapd properly, even with the --purge option, when it does so on a fresh install. My guess is that on a fresh install no snap is loaded (not even core), but once you install some snap and then delete it with sudo snap remove, the core snap still exists (and if you try to delete it, snapd creates a new one). You can only delete it manually after stopping snapd and removing the snapd package with eopkg.
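So the working order, as far as I can tell, is to stop snapd first so it can't restore core mid-cleanup. Something like this (the unit names are snapd's standard ones; the revision number is from my snap list output above, yours will differ):

# Stop and disable snapd so it cannot recreate the core snap
sudo systemctl stop snapd.service snapd.socket
sudo systemctl disable snapd.service snapd.socket

# Then remove the package and clean up core manually
sudo eopkg rmf snapd
sudo umount /snap/core/9289
sudo rm -rf /var/lib/snapd /snap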
It may be funny, but I found a thread about the same problem on the SolusProject Reddit.
The user Linuxllc suggests this script: https://github.com/zyga/devtools/blob/master/reset-state , which does the same things I did manually.
nathanpainchaud Maybe once some snaps (e.g. core) are downloaded snap itself becomes entangled with the system?
Hmm, I don't think so, because after removing snapd with eopkg, no snap-related daemons exist on the system anymore.
Dentraq I didn't express myself quite as extensively and as eloquently as you did in your reply, but I meant essentially the same thing. It seems that as soon as a snap is installed (which requires the core snap to be installed), the whole thing becomes much harder to remove properly, because core is always being restored.
Dentraq Thanks so much, mate.
It was bugging me too how many mounted loops were tentacled in.
I followed your guide and it worked perfectly.
Here is what my lsblk looks like now:
peter@tadhg ~ $ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   4.6T  0 disk
└─sda1        8:1    0   4.6T  0 part /home
sr0          11:0    1  1024M  0 rom
nvme0n1     259:0    0 931.5G  0 disk
├─nvme0n1p1 259:1    0     1G  0 part
├─nvme0n1p2 259:2    0    16G  0 part [SWAP]
└─nvme0n1p3 259:3    0 914.5G  0 part /