Houston we have a failure!

No Comments

I was working on my trusty and crusty (15-year-old!) desktop Pyg, which had just gotten a fresh Manjaro reinstall, when the LVM partitions suddenly disappeared. The only clue I had was that the three directories I use to access the logical volumes started throwing Input/Output errors. I rebooted and it went straight to the recovery console (which is not a good place to be). I tried manually mounting the LVM partitions and it gave this error:

This was a bit of a shock, and no reboot or safe mode could restore the directories. I put it off for the night and took it up again today. I was thinking maybe it was the recent Manjaro in-place upgrade, so I tried booting the latest Ubuntu live CD (using Ventoy, which is very good for creating a multi-system bootable drive), hoping it would show my lost LVM partitions.

I booted into the Ubuntu live desktop and didn't see any LVM mapper entries either, just the same 800+ GB of unallocated space at the tail end of the SSD. I launched a filesystem check, which didn't reveal anything, and started searching on how to manually restore the LVM sectors in the GPT metadata.
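For anyone in the same spot: from a live session, the standard LVM tools can at least confirm whether the kernel sees any LVM metadata at all. A minimal sketch (these need real LVM-carrying disks attached, and the output will obviously differ per machine):

```shell
# scan all block devices for LVM physical volumes, volume groups, and logical volumes
sudo pvscan
sudo vgscan
sudo lvscan

# if the volumes are found but inactive, activate them so /dev/mapper entries appear
sudo vgchange -ay
```

If pvscan finds nothing on a disk that should carry LVM, the problem is below LVM (missing disk, dead cable, wiped partition table) rather than in the volume metadata itself — which is exactly what it turned out to be here.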

I searched my Google Keep for Pyg's partition layout, and then it hit me that all I was seeing is the SSD. There has been a WD Blue mechanical drive in this machine since before I upgraded to the SSD, and I use it to store files before they are backed up to either DVD or Blu-ray. I checked the BIOS of the desktop and it confirmed that the HDD is not showing up. Opening up Pyg seems to be my next weekend project.

Curiously enough, I am not as worried about losing the files as I remember being the first time my WD Green drive malfunctioned. Maybe those files in the LVM partitions (warehouse [files], limbo [multimedia] and arena [game files]) are really not worth that much in the grand scheme of things. If I can still recover them then that's fine, but if not then…

Cautionary Tale for Online Wallet Payments


Gist: when paying via an online wallet, make sure to take a screenshot before payment.

My internet provider bill (from PLDT) was nearing its due date, and I had been paying the last few months' bills via my Maya.ph wallet to take advantage of the crypto coin “cashback” feature. I logged into the Android app and saw that I still had more than 6000 pesos in the wallet, which is sufficient to pay for my internet bill. I proceeded to pay the bill using the saved template, adjusting the amount to Php1900.

Upon submission, the app encountered an error saying it didn't receive a response from the provider and to try again a few minutes later. When I refreshed the app home page, my wallet balance had dropped to Php4200, but there was no transaction listed for it. I waited 5 minutes and refreshed the Maya app, but everything remained the same.

The Maya chatbot is useless and refused to connect me to a human helpdesk agent, so I called the Maya hotline. The agent who answered was helpful enough, but she had no way to view my prior day's balance to confirm the deduction. She could only confirm that there was no new transaction logged to my account; the last one she saw was the payment I made a few days back. She suggested calling the PLDT hotline instead, and emailing support@maya.ph to request a deeper check and an online receipt if needed. It has been 2 days and there is still no response to my request except for the automated ticket number.

I don't have any tangible proof other than this claim, so it will be very easy for Maya to say it didn't happen. This is a very costly lesson in proper documentation, and a very negative experience that will prevent, or at least minimize, my use of the Maya payment system in the future.

Lightning Shock


AWS Transfer Service comes with a bill shock. Life lessons from not reading the fine print. I can't believe it costs that much to run a relatively passive service.

[TipJar] Quickly wipe a file in a Linux shell


No time to read through the context? Jump to the TL;DR section.

There are use cases that require the secure wiping or deletion of files, and there are already a lot of utilities available in most modern distributions, such as srm, wipe, etc. These, however, entail installing an additional package, which is fine for work machines. The use case I had was to securely delete a transient file after it was generated and used on a Continuous Integration server. Installing the secure-delete package is trivial, but a base Linux system already has a tool that can do the job: dd
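A minimal sketch of the idea: overwrite the file's bytes in place with random data before unlinking it. The filename here is a throwaway stand-in for the transient CI artifact; `conv=notrunc` is what keeps dd writing over the existing blocks instead of truncating the file first.

```shell
# create a throwaway file to wipe (stand-in for the transient CI artifact)
f=$(mktemp)
printf 'super secret token\n' > "$f"

# overwrite the file's contents in place with random data, then flush and delete
size=$(stat -c %s "$f")
dd if=/dev/urandom of="$f" bs="$size" count=1 conv=notrunc 2>/dev/null
sync
rm -f "$f"
```

One caveat worth hedging on: on journaling or copy-on-write filesystems (and SSDs doing wear leveling), an in-place overwrite is not a guarantee that no stale copy of the data survives elsewhere on the device.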


Sync clock with Google


If the Linux VM was running at the time the host OS (Windows) hibernated, then the clock in the guest VM will be left at the time of hibernation. If NTP is configured, the clock should resync gradually, but most systems do not apply a big chunk of time correction by default.

If the Linux VM is configured with a VPN that implements a system-wide configuration, then DNS resolution might be unable to resolve the NTP server, since the DNS of the VPN session will be unavailable. The work-around presented here hinges on two things:

  • The guest VM has internet access at the time it was resumed.
  • The google.com domain name is already resolved and cached.

   sync-clock() {
     echo "Current time before sync: $(date)"
     echo "Current time after sync : $(sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 \
       | grep Date: | cut -d' ' -f5-8)Z")"
   }

The work-around uses the Date header in Google's HTTP response as an anchor for the correction. It is written as a bash function so it can give user feedback on whether a correction was made.

Snippet execution

Fix for crashing gnome-control-center


The GNOME control center in my Manjaro system crashed while I was tweaking the details of the mouse settings. After it crashed, I was unable to bring it up again even after a reboot.

The solution was to reset the configuration files for it using the following command:

$ dconf reset -f /org/gnome/control-center/

After that, the control center UI was up and running again. Based on the search hits, visiting the details pane can sometimes cause a state that results in a segmentation fault whenever the control center initializes.
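If you want a safety net before nuking the settings, the same dconf tool can dump the subtree first so it can be restored later. This is just a sketch (the output filename is arbitrary):

```shell
# save the current keys; restore them later with `dconf load` if needed
dconf dump /org/gnome/control-center/ > control-center-settings.ini

# after a reset, this would put the old values back:
# dconf load /org/gnome/control-center/ < control-center-settings.ini
```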

We welcome our new overlords


I heard in a Technology training that “It is hard to comprehend that there will come a time that Artificial Intelligence (AI) will get smarter than us humans but it may still be about 50 more years.”

I find that preposterous. AI, by definition, isn't going to be smarter than human intelligence. It doesn't need to be. Humans, on the other hand, will just grow dumber. Technology helps.

Docker image cleanup


Running docker frequently can lead to mysteriously disappearing disk space. I kid, it's not mysterious: each image build creates layers, which in turn eat up space. Logical, but a PITA!

Put this in your .bashrc or run it as a cron job on your docker host to clean up:


docker rm -v $(docker ps -a -q -f status=exited)
docker rmi $(docker images -f "dangling=true" -q)

or do it in one alias (use at your own discretion):


alias docker_janitor='docker rm -v $(docker ps -a -q -f status=exited); docker rmi $(docker images -f "dangling=true" -q)'

sounds too easy..
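As an aside, and assuming you are on Docker 1.13 or newer, much of this janitorial work is now built in, so a single command can replace the alias (check what it plans to delete before trusting it):

```shell
# remove stopped containers, dangling images, and unused networks in one go
docker system prune -f

# add --volumes to also reclaim unused volumes -- destructive, use with care
# docker system prune -f --volumes
```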

ciao!

Ubuntu 15.04 hanging boot-up issue


Last weekend I reinstalled my Linux system (and not just because I wanted to rename it to Pyg!), which was a very bad decision as it was at the same time that my pitiful internet connection was failing to transmit any data beyond a few bytes. The new install of Ubuntu 15.04 (Vivid Vervet) gave me the confidence to start using btrfs, which is being touted as the next-generation file system for Linux. Bad move, and very short-sighted on my part. 🙁

Btrfs is supposed to improve on the ext4 file system since it is built from the ground up while keeping the useful features already present in ext4. The ext4 filesystem is basically viewed as an aging hack on a hack, since it builds on ext3, which is itself a tack-on improvement to ext2. Btrfs might be delivering on some of its promised features, but those who use it should still be wary, as it is still not considered ready for “production” or real-world use[1]. I wish I had known this before I chose it as the filesystem for my home directory. 🙁

For those who will have the misfortune of encountering it, the gist of my predicament follows (based on how I remember it):

  • I upgraded my system using dist-upgrade.
  • I encountered a severe freeze which left me with no choice but to reach for the power button.
  • System boot-up hung. The last entry went something like this:
  • A start job is running for /home… (10s / no limit)

  • Eventually the boot-up gave up and dropped control to an emergency shell.

It was unnerving for me, since a reinstall meant I would need to re-download the upgrade packages on my #@$@#$ Smart LTE connection. The light-bulb moment for me was that /home is the only partition in the sequence that uses btrfs. The working theory is that something got borked during the forced shutdown which should have been handled by the journalling features. Thanks to a slow alternate connection, I was able to google enough to do the following:

  • Booted the machine and pressed F12 after the BIOS/UEFI prompt.
  • Selected the menu option for Ubuntu Advanced options.
  • Selected the “(recovery)” entry.
  • Selected the “drop to root shell” option.
  • Remounted the root filesystem in read-write mode. I don't think this is needed, but this is what I did and it won't hurt anyhow.
  • # mount -o remount,rw /

  • Ran a btrfs filesystem check on the home partition:
  • # btrfsck /dev/sda4

  • When nothing popped out, I went for the repair option:
  • # btrfsck --repair /dev/sda4

The repair option is always accompanied by a cautionary warning, as it can delete information, so having a proper backup is recommended; in my situation that is like rubbing salt in the wound. When I ran the repair option, it reported that the cache and super generation do not match, and that it cleared the space cache. After that, I ran another btrfsck on the partition and rebooted the system, which thankfully landed me on a working system with no (identified) data loss.
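A side note: on newer btrfs-progs releases, btrfsck is just an alias for the btrfs subcommand, so the equivalent invocations (device name as in my setup) would be:

```shell
# read-only check first; only reach for --repair as a last resort
sudo btrfs check /dev/sda4
sudo btrfs check --repair /dev/sda4
```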

The moral of the story is to select btrfs only for filesystems where you can afford to lose data, or to have a good backup scheme. Using it for /tmp would be overkill, but to each his own. Now I'm debating whether to convert my home partition back to ext4. 😀

ciao!

[1] https://www.wikivs.com/wiki/Btrfs_vs_ext4

MongoDB find vs findOne


I got hit by this newbie bug. The exercise entails getting a specific record from the database, storing it in a variable, updating it a few times, and saving it to the database after each update. Sounds simple, until I was perplexed that I could not view the variable's contents more than once. The second invocation just returned an empty result.


> var myobject = db.products.find({_id : ObjectId("507d95d5719dbef170f15c00")})
> myobject
{ "_id" : ObjectId("507d95d5719dbef170f15c00"), "name" : "Phone Service Family Plan", "type" : "service", "monthly_price" : 90, "limits" : { "voice" : { "units" : "minutes", "n" : 1200, "over_rate" : 0.05 }, "data" : { "n" : "unlimited", "over_rate" : 0 }, "sms" : { "n" : "unlimited", "over_rate" : 0 } }, "sales_tax" : true, "term_years" : 2 }
> myobject
>
> //why can't I display the object contents again?
>

It turned out that I should have used db.products.findOne instead. The findOne function returns an actual document, while the find function returns a cursor. The cursor advances to the next record after each read request, which means subsequent reads get nothing, since the cursor was already pointing at the “end of cursor” position after the first read, if I correlate that correctly with how cursors in relational databases work.
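Rewritten with findOne, the same session behaves as expected (a hypothetical replay of the shell session above, output abbreviated):

```javascript
> var myobject = db.products.findOne({_id : ObjectId("507d95d5719dbef170f15c00")})
> myobject
{ "_id" : ObjectId("507d95d5719dbef170f15c00"), "name" : "Phone Service Family Plan", ... }
> myobject
{ "_id" : ObjectId("507d95d5719dbef170f15c00"), "name" : "Phone Service Family Plan", ... }
```

An alternative, if you really need find(), is to materialize the cursor into a plain array with .toArray() before reusing the result.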

Great to know. I want my 15 minutes back. 🙂

ciao!
