Sunday, 1 April 2018

[2013] Toronto: Strange Behaviour of Future Shop Salesperson Explained

(I am sharing this now because I thought I had already published it.  Turns out all I did was email it to the FSF.  So, finally, here it is.)

I was at a Toronto Future Shop outlet looking for a netbook that was on sale, in the lower price range.  The salesman who was helping me said that a tablet might be cheaper than the netbook if I was after the lowest-cost computer.

I told him that I wasn't sure if I could get Linux onto the tablet, so I would hold off for now.  All of a sudden, the salesman went cold and told me that he wouldn't talk with me anymore if that was what I was going to do.

Stunned, I asked him why he would say that and risk losing a sale.  He fudged around a bit, then offered a story about another salesman who had sold a laptop to a customer who failed to install Linux on it and was then angry because the store wouldn't accept the return with an erased Windows partition.

So I'm thinking, that's a BS answer: all he has to do is tell me that it voids the warranty and make the sale.  But to threaten to stop talking with me because of my personal OS choice?  That's extremely rude!  And he'd lose a sale for Future Shop just because I said the word "Linux" to him?

Now rather upset, I demanded to speak with the store manager, and the three of us discussed it on the sales floor.  The manager confirmed that it would be rude and against Future Shop's policy to outright refuse to talk to a customer because they prefer Linux.  I offered a better way to handle Linux inquiries: as long as Future Shop salespeople tell Linux users that wiping out Windows voids the warranty and complicates the returns process, the customer is properly informed - and it's probably already in the fine print somewhere.  Salespeople at Future Shop aren't liable for what a customer does to their computer to void the warranty.

There is NO reason to just stop talking with a customer and basically walk away from a sale!  Linux users are just as valuable as any other customers, and the way that salesman acted completely devalued me as a customer.

Finally, after the chat with the manager, I went to get the netbook off the shelf.  As I was on my way to the cashier, the salesperson in question finally told me the truth: he used to work as a technician at an Apple store.  His rude behaviour came from Apple's fascist policy about users' rights, and he was making Future Shop look bad by carrying that terrible attitude forward.

Hopefully he realized he doesn't have to treat Linux-wielding customers like crap at Future Shop.

Saturday, 14 October 2017

What if I Want to Mirror End-of-Life CentOS?

I wanted to make a local mirror for some personal systems running CentOS 5.  It is now EOL, and I didn't mirror it before that happened.  It is easy to mirror an EOL release of Debian or Ubuntu, but CentOS doesn't make it easy.  In fact, it seems impossible, though I don't know if that was the intention.

The CentOS team keeps EOL releases at vault.centos.org.  But unfortunately, that is the same server that worldwide mirrors connect to for current releases.  And according to https://wiki.centos.org/HowTos/CreatePublicMirrors, you cannot sync from that server because they have restricted access to only the known public mirror IP addresses.

Well, fuck.  I can't use vault.centos.org, and I can't use any of the worldwide mirrors, because they don't keep EOL versions.  So for anyone wanting to create their own mirror, it comes down to luck.  I guess today I was lucky: I found a site that keeps older releases AND lets me rsync (there are only a few, and most do not support rsync access).  It is at kartolo.sby.datautama.net.id/Centos/.
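For anyone else attempting this, the command would look something like the one below.  The rsync module path ("Centos/5.11") and the local destination directory are my guesses from browsing around that mirror, so adjust them for whatever release you are after:

$ rsync -avH --delete rsync://kartolo.sby.datautama.net.id/Centos/5.11/ /srv/mirror/centos/5.11/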

Well, I am thankful for the existence of a third-party mirror, but it really bugs me that CentOS has locked people out of mirroring their own official source for EOL releases.  I might complain if I feel like it.  But if they don't fix this, I shall make sure to grab the next release before it goes EOL.

Thursday, 5 October 2017

Checkinstall is Unmaintained and Broken - How Could That Happen?

While I have been a Linux user for a number of years, I have been slow to adopt good policy regarding the management of a Linux system.  When it comes to installing software, I rarely ventured beyond the use of my distribution's repositories.  When I did, I would almost always find that the procedure was to compile from source using the traditional "./configure", "make" and "make install" commands.

During the past few weeks, I have been installing the latest version of a program that isn't in the repositories.  It is available as source code and, as usual, is compiled using the above three commands.  Only this time I became more aware than in previous years of the disadvantages of that method.  Primarily, there is no guaranteed way to uninstall the software, as various files get scattered all over the filesystem without any record of what went where.
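For the record, the ritual in question goes like this, run from the unpacked source directory:

$ ./configure
$ make
$ sudo make install   # copies files all over the filesystem, keeping no record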

I remember the first time I read about this disadvantage, on a Mageia wiki page.  It taught me that not only is uninstalling difficult, you also risk making your system unstable.  From that wiki, I remember their advice clearly:

"The golden rule is, never bypass the rpm package database, if you can possibly help it..."
It offered a saner way to install from source: use Checkinstall instead of "make install".  Not only does it keep track of the files installed, it enters them into the RPM or DEB database and produces a package that you can reinstall any time in the future without having to recompile.
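As far as I can tell, the only change to the routine is swapping out the last command, roughly like so:

$ ./configure
$ make
$ sudo checkinstall   # runs "make install" under the hood, records every file, builds a .deb or .rpm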

The first systems on which I used Checkinstall were two Ubuntu 14.04 machines.  It was in the repositories, and I used it without error.  Then I went to a Debian 8 system and installed my program there with Checkinstall as well.  It was great: I had found a sane way to install any program from source code without losing track of which files went where, while also avoiding any potential instability of my systems.

So then I went over to another system, this one running CentOS 7, and found that Checkinstall wasn't available in the repositories.  That was strange, I thought.  I searched https://pkgs.org for any third-party repositories that might have it, but could only find an RPM package for the old CentOS 5.  Okay, fine, I said, I will just have to install Checkinstall from source.

I found the home page of the Checkinstall project, and after reading the documentation I was happy to find that there is a way to run Checkinstall on itself after installing it, so that it too gets entered into the package management database.
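Going from memory of those docs (so treat this as a sketch rather than gospel), the bootstrap looks roughly like this, run inside the Checkinstall source tree:

$ make
$ sudo make install   # install it the untracked way, once
$ sudo checkinstall   # then run it on its own source tree to package itself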

But then I saw that the last release was in 2016, and there had been no news or updates this year.  That worried me.  Anyway, I went ahead, downloaded the source archive, and proceeded with the unpacking and preparation steps to configure and make on my CentOS system.  Only it didn't make: it threw a ton of errors which I am not skilled enough to solve.

So I searched for a possible solution to get Checkinstall working on CentOS 7, only to find more and more bad news.

Checkinstall uses a utility called "Installwatch" which, according to Wikipedia, has not been functioning completely correctly for the past ten years.

After learning all this, I gave up trying to get Checkinstall to work on my CentOS 7 system and just ran the untidy "make install".  But I was really pissed off.  Why is it that a piece of software like Checkinstall, which seems to be the most sensible way to achieve and maintain a sane, stable Linux system when installing from source, is so terribly neglected?  What the fuck is going on?

Maybe there's a newer project out there that replicates this functionality, but I have yet to find it.  My question is: in 2017, what the hell does a Linux user have to do to get a source package installed cleanly?  Does it require learning to build DEB and RPM packages?  I don't know what that entails, but it is probably a lot more complex than running one command.

I do hope that we're not at risk of losing a traditional way of doing things.  I know that a lot of design and skill goes into creating a source package that works with the configure/make/make install trio.  One day, when I have finally learned to code, I will know how to set up a source package to configure and compile in this manner.  Maybe I will even become the maintainer of the Checkinstall software; it seems to me like a very important project in the realm of Linux.  I just wonder, though: why isn't anybody else doing this job?


Saturday, 29 July 2017

Debian priority for packages exim4, mailutils etc. just got set to optional

I'm pissed-off.  Maybe it's just because I don't cope well with change.

I'm not a veteran Linux user by any means.  I'm pretty much one level above newbie.  I just don't like change because it makes learning harder when I don't know what to expect.

So two weeks ago I installed Debian 9 from the "Net Installer" into a VirtualBox VM.  Then two days ago I created another VM and did the same install, using the same ISO image, and making the same selections as before.

Only this time, I discovered that the resulting system was missing packages that I expected to be there.  The packages were related to mailing on the local system, and consisted primarily of exim4 and mailutils.

After much painful searching (and putting up with unfriendly skeptics on the Debian forums), I finally found out that these packages are no longer installed because, sometime in the past fifteen days, their priority was changed from 'standard' to 'optional', which stops the Net Installer from downloading or installing them.
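You can confirm the demotion for yourself, and pull the packages back in by hand; the package names are the ones from my missing list, and the Priority field is the thing that changed:

$ apt-cache show exim4 | grep -i ^priority
$ sudo apt-get install exim4 mailutils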

What the hell?  I thought that having a 'mail' command was a standard thing.  Well, not anymore in the world of Debian.  They are changing the fundamentals of the system I got used to.  But I guess that's what they want to do, and I shouldn't be surprised by it.  The change to systemd was the first major indicator of Debian's new plan.

Well, I guess I can't rely on Debian to include the features a standard *nix system should have anymore.  Not that I'm experienced enough to know all the features that rely on mail; I do know that I won't get any output from cron jobs now.  But if I file a bug report on that, they'll probably just say "fuck it" and pull cron from the default install as well...

My post to Debian Forums that only got replies from unfriendly skeptics:
http://forums.debian.net/viewtopic.php?f=10&t=134079
My Debian bug report that nobody will care about:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=870030
Debian people discussing "cutting some cruft from priority:standard":
https://lists.debian.org/debian-devel/2014/09/msg00480.html

Tuesday, 12 January 2016

Upgrade Fedora 17 to 18 (End-of-life releases)

I couldn't find any information about upgrading end-of-life Fedora versions.  Here is what I learned, as somebody who had never upgraded Fedora before.

Setting repositories to the fedora EOL archives:

In order to install anything on an end-of-life release, you need to change the files in /etc/yum.repos.d.  You only need to change the repos that have "enabled=1" set by default in their section.

Comment out the line:

mirrorlist=...
And uncomment "#baseurl=..." and change it to:

baseurl=http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/$releasever/Everything/$basearch/os
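After the edit, the enabled section of (for example) /etc/yum.repos.d/fedora.repo ends up looking roughly like this - the commented mirrorlist URL here is just a stand-in for whatever yours says:

[fedora]
name=Fedora $releasever - $basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
baseurl=http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/$releasever/Everything/$basearch/os
enabled=1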
Now you should be able to install the fedup package.

Running fedup:

I found that the simple command "fedup --network 18" failed with the message "Downloading Failed: couldn't get boot images".  This is because the archives store the boot images under a different path.  So I ran:
fedup --network 18 --instrepo "http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/18/Fedora/x86_64/os/"
(replace x86_64 with i386 for 32-bit installs)

This solves the boot image problem.

NOTE: I got 404 errors running this command at first, but after I removed everything inside /var/tmp/fedora-upgrade/, there were no more 404 errors.

Friday, 6 December 2013

LXFDVD120 - On the hunt for LXF magazine's DVD

Will the Internet see me through in my mini-quest to find a file?  I'm looking for an ISO image for a certain issue of Linux Format Magazine (LXF).  The original site doesn't have it, and the rest of the Internet appears to be devoid of any sign that this file still exists.

I first got to looking for this file when I started collecting all the DVDs from the LXF site.  They have torrents of most of their recent DVDs, but the entry for issue #120 is missing the link for downloading that particular DVD.

In January 2013, I fired off an email to their support address, and I sent another one almost a year later.  In the meantime, I have been keeping an eye on the Internet to see if something shows up.

The main reason I want this DVD is that I love the 9.04 release of Ubuntu - the Jaunty Jackalope - and I want to see what LXF did to customize it on this DVD.  It isn't even a good reason, really, but I still went after it.

There are some copies available on archive.org, which has a collection of the LXF DVDs, some much older than my collection needs for relative completeness.  However, it is user-maintained, and nobody has uploaded a copy of DVD 120.

Maybe somebody still has the hardcopy DVD - for heaven's sake, it's only 4 years old; this isn't an archeological project here!  There are some back issues on sale on eBay, and Amazon even offers a year's subscription to the current magazine.  Linuxformat.ru has a lot of older issues translated into Russian, available to subscribers, but no DVDs come with those.

There was an interesting site with a listing of all publications by Future Publishing Ltd, which showed the exact DVD I am in search of:
http://www.nsrl.nist.gov/RDS/rds_2.42/ProdList.txt
Future Publishing	LXFDVD120	July 2009

I almost want to contact the company to find out if they have the hardcopy DVD somewhere, but it looks like that list is more of a record than an inventory.

Well, until it shows up on a torrent site or somebody uploads theirs to archive.org, I'll just have to leave the Internet to work away steadily, perhaps turning up exactly what I need in time.  I'll likely forget about the custom Jaunty on LXFDVD120 for now, but eventually I will revisit this hunt, and I will be eager to try the disc once I get it.

Friday, 4 October 2013

Ubuntu KVM: virt-manager fails to connect to a remote system by default - the fixes

Two problems:

1. The Ubuntu KVM community wiki doesn't say fuck-all about connecting remotely.  It assumes the user is running everything on their local system.
 
2. The error message gives advice that could mislead the user into installing crap on the client side that they do not need, and that in any case has nothing to do with the problem.  The "Details" text is so generic and unhelpful (you will get the same error message when the network is unavailable) that it leaves the user searching endlessly for the right bug to match the problem.

The (useless and misleading) Error Message Advice

Unable to open a connection to the libvirt management daemon.

Verify that:
 - The 'libvirt-bin' package is installed
 - The 'libvirtd' daemon has been started
 - That you have access to '/var/run/libvirt/libvirt-sock'

The (Unhelpful) Details Text

Unable to open connection to hypervisor URI ...

<class 'libvirt.libvirtError'> virConnectOpenReadOnly() failed
Connection reset by peer
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/connection.py", line 332, in
_open_thread
    self.vmm = libvirt.openReadOnly(self.uri)
  File "/usr/lib/python2.5/site-packages/libvirt.py", line 144, in
openReadOnly
    if ret is None:raise libvirtError('virConnectOpenReadOnly() failed')
libvirtError: virConnectOpenReadOnly() failed Connection reset by peer

The Fixes:

(This was done using Ubuntu Hardy (8.04) in VirtualBox, but I have seen the same problem with Ubuntu 12.04, so it seems to apply across many versions of Ubuntu.)

So, you installed the KVM packages, or you selected the "Virtualization" option while installing an Ubuntu server, and by default remote management won't work.  You've played around with the virt-manager interface on your desktop system, and that already works, but now you want to manage the server from your desktop.  Here's what is missing and/or hard to find:

1. You must add your user to the libvirtd group (listed in /etc/group) on the server.  "user" here means the name of your user on your desktop system, so the names on the two machines need to match, or you'll have to figure out something extra to make it work.
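Rather than hand-editing /etc/group, something like this on the server should do the same job (substitute your actual user name, and log out and back in for the group change to take effect):

$ sudo adduser <user> libvirtd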

2. To connect successfully using the SSH method, you must have created a pair of SSH key files (id_rsa, id_rsa.pub) on your desktop system, using the command ssh-keygen and NO PASSPHRASE.

Then you need to copy the .pub file from /home/<username>/.ssh/id_rsa.pub on the desktop system to the server, saving it under the new name /home/<username>/.ssh/authorized_keys.
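A rough sketch of the whole key dance, run from the desktop system; ssh-copy-id appends to authorized_keys instead of overwriting it, which is safer if that file already exists on the server:

$ ssh-keygen -t rsa               # press Enter at the passphrase prompts
$ ssh-copy-id <user>@<server>     # or scp the .pub file over manually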

virt-manager will not ask you for your password with the normal SSH connection method; it will just fail with the stupid message you see at the top of this post.

Furthermore, if you followed the advice of running "sudo virt-manager" for its first run on the client or server, it will not be able to start as your normal user afterwards, because the config directory ".virt-manager/" in your home folder will be owned by root.  In that case, use:

$ sudo chown -R <user>:<user> /home/<user>/.virt-manager/

From there, you will be able to run virt-manager without having to be root.



Finally, I just wanted to mention that I originally installed virt-manager on the server itself, pulling tons of packages into it and essentially installing a whole desktop just to use it.  That way I could get around the problem by connecting with "ssh -X" and using virt-manager remotely.  It was a fucking waste of time.