
Monday, July 31, 2017

No thumbnails for video previews in Nautilus.

As the default CentOS 7 / GNOME 3 video player, Totem, does not support many of the video formats in use today, thumbnails are not generated by Nautilus (Nautilus uses Totem to generate them).  There are two causes of this problem:

1) MP4, MKV and other formats are not supported by Totem out of the box, so specific codecs need to be installed.  As posted on Ask Fedora, here are the GStreamer packages that need to be installed for this to work properly: https://ask.fedoraproject.org/en/question/9267/thumbnail-for-videos-in-nautilus/

yum -y install gstreamer1-libav gstreamer1-plugins-bad-free-extras gstreamer1-plugins-bad-freeworld gstreamer1-plugins-base-tools gstreamer1-plugins-good-extras gstreamer1-plugins-ugly gstreamer1-plugins-bad-free gstreamer1-plugins-good gstreamer1-plugins-base gstreamer1

Then delete the directory of failed thumbnails:

rm -r ~/.cache/thumbnails/fail
 
Log out and log back in, just to make sure GNOME picks up the new plugins.  This step may not be necessary, but it may help.

2) The next thing to do is to increase the maximum file size for which Nautilus will generate thumbnails.  By default this is set to 1 MB.  (NOTE: GNOME does not recommend increasing this limit too much due to the impact it will have on performance.  However, the speed at which the thumbnails are generated is largely dependent on the file format of the videos: MP4s are done very quickly, while FLVs take much longer.)  To increase it, navigate to "Files"->"Preferences"->"Preview" and change "Only for files smaller than:" to whichever size you prefer.  I've set mine to 4 GB and performance seems fine.
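If you prefer the command line, the same limit is exposed through gsettings.  The key below is the standard Nautilus one, but the units it expects (bytes in some Nautilus versions, megabytes in others) have changed over time, so treat this as a sketch and check the key's description and current value before setting anything; the last line assumes the key is in megabytes:

$ gsettings describe org.gnome.nautilus.preferences thumbnail-limit
$ gsettings get org.gnome.nautilus.preferences thumbnail-limit
$ gsettings set org.gnome.nautilus.preferences thumbnail-limit 4096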

Saturday, July 29, 2017

Bootloader bug with system-config-kickstart

Here is another minor bug with system-config-kickstart:

When opening an existing kickstart configuration file, the application fails to read or load the bootloader configuration.  If this isn't reconfigured within system-config-kickstart, saving the file will set the bootloader directive to:

bootloader --location=none --boot-drive=<disk device>


The location option should have been populated with the value from the original file when it was read.

The best thing to do is to double-check all of the basic settings once the kickstart config file is created (and saved), and manually edit whatever needs to be adjusted.
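For reference, a manually corrected directive would look something like this (illustrative values only; substitute your actual bootloader location and boot drive):

bootloader --location=mbr --boot-drive=sda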

Tuesday, July 25, 2017

system-config-kickstart error in CentOS 7

Using system-config-kickstart version 2.9.6-1.el7 on CentOS 7 yields the following error when attempting to select packages.

"Package selection is disabled due to problems downloading package information."

Screenshot of the message in the "Package Selection" menu.


It seems someone has filed a bug with CentOS regarding this problem: see https://bugs.centos.org/view.php?id=9611

As stated by the bug reporter, the issue can be fixed by modifying line 161 of the file /usr/share/system-config-kickstart/packages.py:

156         # If we're on a release, we want to try the base repo first.  Otherwise,
157         # try development.  If neither of those works, we have a problem.
158         if "fedora" in map(lambda repo: repo.id, self.repos.listEnabled()):
159             repoorder = ["fedora", "rawhide", "development"]
160         else:
161             repoorder = ["rawhide", "development", "fedora"]


Becomes:

161             repoorder = ["rawhide", "development", "fedora", "base"]
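If you prefer to apply the change from the shell, a sed one-liner along these lines should work (my own shortcut rather than part of the bug report; back up the file first, and note that it assumes line 161 reads exactly as shown above):

# cp /usr/share/system-config-kickstart/packages.py{,.bak}
# sed -i 's/\["rawhide", "development", "fedora"\]/["rawhide", "development", "fedora", "base"]/' /usr/share/system-config-kickstart/packages.py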

Restart system-config-kickstart and packages can now be read from the local yum repositories.

Sunday, December 22, 2013

Enabling the talk daemon on Fedora 20

Well, it's been a few years, and as technology changes, so do the methods used to configure a system.

I still use the talk program on a regular basis.  Here are the instructions for enabling it:

# yum install xinetd talk-server talk

# systemctl enable xinetd.service
# systemctl enable ntalk.service


At this point, simply starting the xinetd and ntalk services does not seem to be enough for the talk program to function.  At the moment, the only solution I found was to reboot the system.  If someone has a better way, I would very much like to know.

# reboot

Talk should now work.  However, there is a chance that SELinux will deny it.  Check your logs:

# grep -i denied /var/log/audit/audit.log


If you do get a denial, you will need to build a new policy module.  Make sure you have the checkpolicy utility installed:

# yum install checkpolicy

# grep in.ntalkd /var/log/audit/audit.log | audit2allow -M mypol

# semodule -i mypol.pp

That's it.


Thursday, September 12, 2013

Misc Notes

Securely delete files in linux:
# srm --help

# shred --help
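For example, shred (part of coreutils) can overwrite a file a few times, add a final pass of zeros, and then remove it; the filename here is just a placeholder:

$ shred -n 3 -z -u sensitive-file.txt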


---------

Bash history tweaks:
1) Increase history size in Redhat / Fedora / CentOS:

# vim /etc/profile
...
HISTSIZE=100000
...

2) Append history from multiple terminals:

# vim /etc/bashrc
    ...
    fi
    # Turn on append history
    shopt -s histappend
    history -a

    # Turn on checkwinsize

    ...
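Note that a history -a placed in /etc/bashrc only runs once, when a new shell starts.  If you want each command appended to the history file as soon as it is executed, a common variation (my own addition, not part of the original note) is to run the append from PROMPT_COMMAND instead; if PROMPT_COMMAND is already set on your system, append to it rather than overwriting it:

    shopt -s histappend
    PROMPT_COMMAND='history -a'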

---------

Copy your public key to a remote host:

$ ssh-copy-id -i ./.ssh/id_rsa.pub remote.domain.com

Tuesday, August 13, 2013

Granular permissions through sudoers

A quick example of how to grant root permissions for specific commands to a specific group of users.

You can create command aliases, which are very useful for organizing and controlling access to sets of commands.

For example:

Cmnd_Alias VI    = /usr/bin/vim

Note that alias names must be uppercase.  This alias will match the command whether the user types the full path /usr/bin/vim or just plain vim.

Assigning ROOT permissions to run this command alias to a specific user:

username ALL=(root) VI

And the same for a group:

%groupname ALL=(root) VI

In the example below, I give a new group called nginxadm access to all of the NGINX service commands on a RedHat 6 system.

Open up the sudoers file using visudo.

## NGINX USERS - should be part of nginxadm group
# Usage: nginx {start|stop|restart|condrestart|try-restart|force-reload|upgrade|reload|status|help|configtest}
Cmnd_Alias NG           = /sbin/service nginx
Cmnd_Alias NGRES        = /sbin/service nginx restart
Cmnd_Alias NGSTA        = /sbin/service nginx start
Cmnd_Alias NGSTO        = /sbin/service nginx stop
Cmnd_Alias NGSTS        = /sbin/service nginx status
Cmnd_Alias NGCDR        = /sbin/service nginx condrestart
Cmnd_Alias NGTRS        = /sbin/service nginx try-restart
Cmnd_Alias NGFRL        = /sbin/service nginx force-reload
Cmnd_Alias NGUPG        = /sbin/service nginx upgrade
Cmnd_Alias NGRLD        = /sbin/service nginx reload
Cmnd_Alias NGHLP        = /sbin/service nginx help
Cmnd_Alias NGCFG        = /sbin/service nginx configtest
%nginxadm ALL=(root)    NG,NGRES,NGSTA,NGSTO,NGSTS,NGCDR,NGTRS,NGFRL,NGUPG,NGRLD,NGHLP,NGCFG
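With that in place, a member of the nginxadm group can run any of the aliased commands through sudo, for example:

$ sudo /sbin/service nginx status
$ sudo /sbin/service nginx reload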


Thanks to file ACLs (FACLs) in Linux, we can also grant granular permissions on the NGINX configuration files.
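For instance, something along these lines (only a sketch, and it assumes the configuration lives under /etc/nginx) would let the same nginxadm group read and edit the configuration files without handing out full root access:

# setfacl -R -m g:nginxadm:rwX /etc/nginx
# getfacl /etc/nginx/nginx.conf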

Thursday, July 18, 2013

IBM DB2 connector - executable stack error

I ran into this issue following the installation of a new IBM DB2 connector library for use with PHP on an Apache server.

PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ibm_db2.so' - libdb2.so.1: cannot enable executable stack as shared object requires: Permission denied in Unknown on line 0

As it turns out, Dan Walsh already wrote up something about this on his blog in 2011:

http://danwalsh.livejournal.com/38736.html

Following his instructions, I opted to clear the executable stack flag.  I've modified his one-liner to run the clear operation (execstack -c) on all the files found, since it would be tedious to change them one by one.

find /opt/ibm/db2/V9.7/lib64 -exec execstack -q {} \; -print 2> /dev/null | grep ^X | cut --delimiter=' ' -f2 | xargs -n 1 execstack -c
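To confirm the flag really was cleared on the library named in the PHP warning, you can query it again afterwards (the path assumes the same DB2 install directory as above); a leading '-' instead of 'X' in the output means the executable stack flag is now off:

# execstack -q /opt/ibm/db2/V9.7/lib64/libdb2.so.1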

This fixed the problem.

Wednesday, July 10, 2013

List users in linux

Here is a quick tip on how to (more or less reliably) list all non-system users in Linux and print status information about their accounts.

grep -i ':/home/' /etc/passwd | cut -d: -f1 | xargs -n 1 passwd -S

Step 1:
We use grep to look for entries in the /etc/passwd file where the /home/ directory is specified.  System users don't normally have such a directory.

Step 2:
We then use cut to retain only the username, and we specify : (colon) as a delimiter.

Step 3:
We run the usernames through xargs to pass each username as a single argument to the passwd -S command.

passwd -S prints a short status of the account: whether it's locked, expired, and so on.
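An alternative sketch (my own variation, not part of the original tip) filters on the UID range instead of the home directory.  UID_MIN is typically 500 on older RedHat releases and 1000 on newer ones, so adjust the threshold accordingly:

# getent passwd | awk -F: '$3 >= 500 && $3 < 65534 {print $1}' | xargs -n 1 passwd -S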

Thursday, May 2, 2013

Using GDB to explore a core file

Before we go any further, please note that these are instructions for a Centos 6.3 OS.  They should work in a similar way on Redhat and Fedora.

Recently, I've been experiencing issues with pacemaker immediately after upgrading from one version to another.  To make a long story short, crmd, which is part of the pacemaker package, was crashing every 15 minutes and dumping a core file.  It became necessary to read the details of the core dump using gdb.  To be fair though, the service itself stayed up fine but kept respawning child processes.  So, even though it complains and dumps a core file, it is still robust enough to continue operating.

Now the question becomes: how do we get the error details out of the core file?  We first need to make sure the right repositories are available for our OS.  For example, drop a repo definition like the following into a file under /etc/yum.repos.d/:

# DEBUGINFO
[debuginfo]
name=debuginfo
baseurl=http://debuginfo.centos.org/$releasever/$basearch
enabled=0
gpgcheck=0

You will need a tool called 'debuginfo-install', which is part of the yum-utils package.

# yum install yum-utils

Install gdb

# yum install gdb

Run gdb against your executable and its core file.  For example:

# gdb /usr/libexec/pacemaker/crmd ./core.26688

If you are missing any debuginfo files, gdb will let you know which ones.  You can then install them as required.  For example:

Missing separate debuginfos, use: debuginfo-install audit-libs-2.2-2.el6.x86_64

Simply run the suggested command to get the right debuginfo files.

# debuginfo-install audit-libs-2.2-2.el6.x86_64 --enablerepo=debuginfo

Once you've got all your debug info files, run the gdb debugger once again.

# gdb /usr/libexec/pacemaker/crmd ./core.26688
...
Core was generated by `/usr/libexec/pacemaker/crmd'.
Program terminated with signal 6, Aborted.
#0  0x00007f81896ac8a5 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
64        return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
Missing separate debuginfos, use: debuginfo-install libtool-ltdl-2.2.6-15.5.el6.x86_64
(gdb)


At this point you are in the gdb prompt.  Use "bt" to output the backtrace of the error.

(gdb) bt
#0  0x00007f81896ac8a5 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x00007f81896ae085 in abort () at abort.c:92
#2  0x00007f818bb8a56b in crm_abort (file=0x7f818bba9d58 "xml.c", function=0x7f818bbab6b4 "string2xml", line=650,
    assert_condition=0x7f818bbaa01a "String parsing error", do_core=<value optimized out>, do_fork=<value optimized out>) at utils.c:1073
#3  0x00007f818bb933af in string2xml (input=0x1e745f8 ...)
#4  0x00007f818b76a2fc in lrmd_ipc_dispatch (buffer=<value optimized out>, length=<value optimized out>, userdata=0x1e72910) at lrmd_client.c:310
#5  0x00007f818bba2e90 in mainloop_gio_callback (gio=<value optimized out>, condition=G_IO_IN, data=0x1e73be0) at mainloop.c:585
#6  0x00007f8188fbbf0e in g_main_dispatch (context=0x1d4f120) at gmain.c:1960
#7  IA__g_main_context_dispatch (context=0x1d4f120) at gmain.c:2513
#8  0x00007f8188fbf938 in g_main_context_iterate (context=0x1d4f120, block=1, dispatch=1, self=) at gmain.c:2591
#9  0x00007f8188fbfd55 in IA__g_main_loop_run (loop=0x1e734a0) at gmain.c:2799
#10 0x00000000004052ce in crmd_init () at main.c:154
#11 0x00000000004055cc in main (argc=1, argv=0x7fffe77a4f88) at main.c:120
(gdb)


This is the information we were looking for.
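As an aside, the same backtrace can be captured non-interactively, which is handy for scripting or for pasting into a bug report:

# gdb --batch -ex bt /usr/libexec/pacemaker/crmd ./core.26688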

Thanks to Andrew Beekhof of  http://clusterlabs.org/ for pointing me in the right direction with GDB.

Tuesday, April 9, 2013

Log file system changes using Audit

Adding an audit rule to log file-system activity:

# auditctl -w /home/something/ -p rwa

Flags:

-w  Insert a watch on the given path
-p  Set the permission filter (r = read, w = write, x = execute, a = attribute change)

If you search the audit log, you will only get results if there has been activity:

# ausearch -f /home/something/
<no matches>

Now, if we create a new file and search again:

# touch /home/something/testing1

# ausearch -f /home/something/
----
time->Tue Apr  9 08:53:08 2013
type=PATH msg=audit(1365511988.313:969510): item=1 name="/home/something/testing1" inode=54411 dev=08:15 mode=0100644 ouid=0 ogid=0 rdev=00:00 obj=user_u:object_r:user_home_t:s0
type=PATH msg=audit(1365511988.313:969510): item=0 name="/home/something/" inode=54374 dev=08:15 mode=040700 ouid=1041 ogid=1041 rdev=00:00 obj=user_u:object_r:user_home_dir_t:s0
type=CWD msg=audit(1365511988.313:969510):  cwd="/root"
type=SYSCALL msg=audit(1365511988.313:969510): arch=c000003e syscall=2 success=yes exit=0 a0=7fff217f4cb6 a1=941 a2=1b6 a3=32cc35410c items=2 ppid=10799 pid=26986 auid=1041 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=130958 comm="touch" exe="/bin/touch" subj=user_u:system_r:unconfined_t:s0 key=(null)


Let's delete the file and see what happens to the logs.

# rm /home/something/testing1

Run the search again:

# ausearch -f /home/something/

Notice the syscall field displayed in each SYSCALL entry.  What does the numeric code mean?  Let's make things more 'readable' by adding the -i flag.

# ausearch -f /home/something/ -i
----
type=PATH msg=audit(04/09/2013 09:00:52.093:969611) : item=1 name=/home/something/testing1 inode=54411 dev=08:15 mode=file,644 ouid=root ogid=root rdev=00:00 obj=user_u:object_r:user_home_t:s0
type=PATH msg=audit(04/09/2013 09:00:52.093:969611) : item=0 name=/home/something/ inode=54374 dev=08:15 mode=dir,700 ouid=something ogid=something rdev=00:00 obj=user_u:object_r:user_home_dir_t:s0
type=CWD msg=audit(04/09/2013 09:00:52.093:969611) :  cwd=/root
type=SYSCALL msg=audit(04/09/2013 09:00:52.093:969611) : arch=x86_64 syscall=unlink success=yes exit=0 a0=7fff2c3b2cbc a1=1 a2=2 a3=168f7610 items=2 ppid=10799 pid=27302 auid=something uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=130958 comm=rm exe=/bin/rm subj=user_u:system_r:unconfined_t:s0 key=(null) 


The timestamps are now all converted to human-readable formats, the UIDs and GIDs are converted to user and group names, and the system call codes show something intelligible.  For example, syscall=87 now reads syscall=unlink, which we can interpret as 'delete'.

You can search by system call codes as well.  Instead of displaying all activity on all files and reading through each entry one by one, you can search for 'unlink' system calls.

The flag is -sc <syscall>

For example, the following command will return the log entries showing an 'unlink' call.

# ausearch -f /home/something/testing1 -i -sc unlink

To remove a watch, use the -W flag.  Note that when using this flag, the remove (-W) command must match the original rule exactly.  If you don't know the exact rule, you can list them:

# auditctl -l

LIST_RULES: exit,always dir=/home/something (0xe) perm=rwa

Now we can delete it using:

# auditctl -W /home/something -p rwa
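One last note (my addition, not from the original post): rules added with auditctl are lost on reboot.  To make the watch permanent, add the same rule to the audit rules file, /etc/audit/audit.rules on older releases or a file under /etc/audit/rules.d/ on newer ones:

-w /home/something/ -p rwa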

Wednesday, December 12, 2012

Extending root partition on the fly - Part 2

In my previous post I discussed some techniques for expanding disks while ensuring zero downtime.

These techniques are not always viable, as it depends on the Operating System version you have, as well as the LVM version.  If you don't use LVM and instead use native Linux partitions, then things can get a bit uglier and you will probably need one or two reboots.  In CentOS 5.2 I can't properly re-scan the iSCSI devices; it seems support for this may only have been added in version 5.4, when RedHat began adding shell scripts to perform this operation.

In any case, for the more recent versions of RedHat and CentOS (5.5, 6, 6.2, 6.3) the previously described techniques work just fine.  There is only one caveat.  In order to avoid having to disable the volume group, unmount the filesystem and stop the services that are using it, one must create a new "Physical Volume" using pvcreate, as I've done.  The only problem with this, and it's not a big one, is that you end up with several separate physical volumes all belonging to the same LVM volume group.

If you want to expand the disk while keeping only one physical volume in the LVM, it will be necessary to disable the volume group in order to use pvresize.  Note that this implies shutting down services and unmounting the filesystem to be expanded.

Example:

1) Extend disk in vmware

2) Rescan the disk
# echo "1" > /sys/class/scsi_device/<device number>/device/rescan

3) resize the partition in question using fdisk <device>
# fdisk /dev/sdb

4) re-read the partition table:
# partprobe

5) If your service is apache:
# service httpd stop

6) Disable the volume group
# vgchange -a n <volume group name>

7) Physical Volume Resize
# pvresize /dev/sdb1

7.1) If you check your physical volume now you should see the free extents:
# pvdisplay

8) Re-enable the volume group
# vgchange -a y <vg name>

9) Re-mount the device
# mount /var/www (for example) or mount -a

10) Restart your service
# service httpd start

11) Note that so far we've only expanded the Volume Group; neither the Logical Volume nor the File System has been extended yet.  Both of these can be done on the fly, so it's OK if the filesystem stays mounted.
# lvextend -l +<num of free extents> /dev/<vg name>/<lv name>

12) Resize the filesystem
# resize2fs /dev/<vg name>/<lv name>
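Finally, it doesn't hurt to confirm the new space is actually visible (standard checks, not part of the original procedure):

# df -h
# lvdisplay /dev/<vg name>/<lv name>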

In our case, we find it helpful to test these procedures on clones first, to make sure we are using the most appropriate technique for the situation.

Tuesday, October 23, 2012

Extending root partition on the fly - linux on vmware

You've extended your VM's only disk by a bunch of gigabytes.  You have Apache / MySQL running on it and you can't afford any downtime.  You now need the operating system to recognize all that new space.  What can you do?  Expanding a root partition at runtime on a Linux guest OS requires a bit of planning but is still fairly straightforward.

The following was performed on a CentOS 5.5 Guest VM running on VMWare.

Let's see how much space we have right now:

[root@... ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       18G   11G  6.5G  61% /
/dev/sda1              99M   13M   82M  14% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm


First, make sure the OS can recognize that the hardware actually changed.  Rescan your SCSI device:

# echo "1" > /sys/class/scsi_device/<device number>/device/rescan

Next comes the fun part.  This is where planning for disaster comes in handy; so even though you don't want to take any downtime, plan for it:  Take a snapshot of your VM.

Partition the device and add a new partition using fdisk.  In my case I have a /boot partition and a / partition, so my new partition will be number 3, i.e. /dev/sda3.

# fdisk /dev/sda

-- if we now try to create a new physical volume, it should fail until we run the partprobe command.

# partprobe

UPDATE: Since RedHat 6 (CentOS 6), you can use the partx command to force the changes to take effect on the partition table.  Note that partx does not do the same validation as partprobe, so if you've made mistakes with your partition layout, data can be erased.  If you are certain that your layout is correct, then proceed using:

# partx -l /dev/sda


And to force changes to take effect:

# partx -v -a /dev/sda

( See RedHat's recommendation at: https://access.redhat.com/site/solutions/57542 )

-- Create a physical volume from the new partition now that the kernel knows about it.

# pvcreate /dev/sda3

-- Extend the volume group to use up all space on the new physical volume.

# vgextend VolGroup00 /dev/sda3

-- Extend the logical volume by the number of free physical extents available in the group (use +)

# lvextend -l +3200 /dev/VolGroup00/LogVol00

-- Finally run an online resize of the mounted partition without affecting anything.

# resize2fs /dev/VolGroup00/LogVol00


That's it.  Now run df -h:

[root@... ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      115G   11G   99G  10% /
/dev/sda1              99M   13M   82M  14% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm



----------------- EXTENDING A SWAP PARTITION -----------------

If you are doing this on a SWAP partition, then make sure you follow these instructions:

After extending the VolumeGroup, disable your swap:

[root@... ~]# swapoff -a

Extend your swap logical partition ( by 320 Physical Extents in my case ):

[root@... ~]# lvextend -l +320 /dev/VolGroup00/LogVol01

Check your logical volume size:

[root@... ~]# lvdisplay /dev/VolGroup00/LogVol01
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                --- ---- ---- --- ----
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                11.97 GB
  Current LE             383
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1


Use mkswap to reinitialize the swap space on the logical volume.  There is no need to worry about the data: while swap is disabled, it does not contain anything that needs to be preserved.

[root@... ~]# mkswap /dev/VolGroup00/LogVol01
Setting up swapspace version 1, size = 12851343 kB


Now restart your swap and check your memory:

[root@... ~]# swapon -a

[root@... ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         24106       2917      21188          0        371        484
-/+ buffers/cache:       2061      22045
Swap:        12255          0      12255

Activate memory in Linux at run time using bash - vmware

Following up on my previous post about "Adding scsi device at runtime on linux guest VM", here is some information on how to use a bash for-loop to activate memory that was added at runtime to a VMware guest.

I've tested this on a CentOS 5.5 system.

First, add the new memory using VMWare VSphere.

Second, find the new memory that is currently listed as "offline".

[root@... ~]# grep offline /sys/devices/system/memory/*/state
/sys/devices/system/memory/memory40/state:offline
/sys/devices/system/memory/memory41/state:offline
/sys/devices/system/memory/memory42/state:offline
/sys/devices/system/memory/memory43/state:offline
/sys/devices/system/memory/memory44/state:offline
/sys/devices/system/memory/memory45/state:offline
/sys/devices/system/memory/memory46/state:offline
/sys/devices/system/memory/memory47/state:offline
/sys/devices/system/memory/memory48/state:offline
/sys/devices/system/memory/memory49/state:offline
/sys/devices/system/memory/memory50/state:offline
/sys/devices/system/memory/memory51/state:offline
/sys/devices/system/memory/memory52/state:offline
/sys/devices/system/memory/memory53/state:offline
/sys/devices/system/memory/memory54/state:offline
/sys/devices/system/memory/memory55/state:offline
/sys/devices/system/memory/memory56/state:offline
/sys/devices/system/memory/memory57/state:offline
/sys/devices/system/memory/memory58/state:offline
/sys/devices/system/memory/memory59/state:offline
/sys/devices/system/memory/memory60/state:offline
/sys/devices/system/memory/memory61/state:offline
/sys/devices/system/memory/memory62/state:offline
/sys/devices/system/memory/memory63/state:offline
/sys/devices/system/memory/memory64/state:offline
/sys/devices/system/memory/memory65/state:offline
/sys/devices/system/memory/memory66/state:offline
/sys/devices/system/memory/memory67/state:offline
/sys/devices/system/memory/memory68/state:offline
/sys/devices/system/memory/memory69/state:offline
/sys/devices/system/memory/memory70/state:offline
/sys/devices/system/memory/memory71/state:offline



Then, use a for loop to activate that memory:

[root@... ~]# for memcount in {40..71}; do echo online > /sys/devices/system/memory/memory$memcount/state; done
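A variation of the same idea (my own sketch) flips every offline block without hardcoding the range, which is convenient when the block numbers differ from system to system:

[root@... ~]# for state_file in /sys/devices/system/memory/memory*/state; do grep -q offline "$state_file" && echo online > "$state_file"; done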


Check the new memory is active:

[root@... ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          8045        854       7190          0          8         87

Friday, October 12, 2012

Pre-allocating RAM on a Virtualbox guest

One of the problems with guest VMs in VirtualBox is that RAM is allocated by the host on demand, as the guest gradually uses more memory.  This is fine if you run many VMs that do not all need their full memory allocation at once, but you will find it inconvenient if you really need to dedicate a specific (read: large) amount of memory to a VM.  This is especially true when running a guest on a host like Windows 7, where SuperFetch has already allocated chunks of memory to different applications.  When the guest requests more memory, the host OS may not give it up, because it is being held for other programs which may not necessarily need it.

-- The solution then...

Forcing VirtualBox to "grab" all of the guest's memory at startup is possible.  This will attempt to allocate the entire guest memory from the host.  If that memory isn't really free, then the guest will not start at all.

There is a boolean flag which can be set as follows:

VBoxManage setextradata <VM NAME> VBoxInternal/RamPreAlloc 1

That's all.
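To check that the value took, or to turn the behaviour off again later, the matching getextradata / setextradata calls are (setting the key back to 0 should disable the preallocation):

VBoxManage getextradata <VM NAME> VBoxInternal/RamPreAlloc
VBoxManage setextradata <VM NAME> VBoxInternal/RamPreAlloc 0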

Documentation: Unfortunately there is very little documentation on the "VBoxInternal" keyset, probably because it is mainly intended for development rather than "real" day-to-day use.  Consider this, then, a hack.

The possible variables that can be set using VBoxInternal seem to be defined in the following C++ header file:

http://www.virtualbox.org/svn/vbox/trunk/src/VBox/VMM/include/PGMInternal.h

Here is another interesting file worth reading:

http://www.virtualbox.org/svn/vbox/trunk/include/VBox/err.h

I would be very careful with attempting to set any of these without a good understanding of the VirtualBox codebase.

Wednesday, October 10, 2012

Adding or resizing scsi device at runtime on linux guest VM

It's always a tricky thing to add more disk space to a VM when you don't want to take down its services to reboot the box.  Linux doesn't need to be rebooted just to know that a new device has been attached, but how do you get it to recognize the device?

As per a great blog post by Vivek Gite from NixCraft on: http://www.cyberciti.biz/tips/vmware-add-a-new-hard-disk-without-rebooting-guest.html

The basic command to re-scan scsi devices is:

echo "- - -" > /sys/class/scsi_host/<host#>/scan

fdisk -l

tail -f /var/log/messages
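If you are not sure which host number to use, list the available SCSI hosts first; each entry under this directory corresponds to a <host#> you can rescan:

# ls /sys/class/scsi_host/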


If you do not have a new disk, but instead have increased the size of an existing disk, then you must rescan the device.  Note that this may not be appropriate if the device is used for the /boot partition.

# echo "1" > /sys/class/scsi_device/<device>/device/rescan


Friday, August 10, 2012

Parse Apache Logs by Date Range

Parsing apache logs by date and by date ranges can be fairly simple with a bit of awk scripting.

We use AWK to compare date fields in order to retrieve specific rows.

The date fields between access logs and error logs can vary, so some adjustments are needed:

Note that the date field is contained within a single column in the access_log file, therefore we can do a comparison against a single column.  Typically column #4.

AWK Date Range for access logs:

$ awk '$4>"[09/Aug/2012:15:00:" && $4<"[09/Aug/2012:15:59:"' ./access_log | less

The date field in the error log is spread across separate columns.  Example: [Thu Aug 09 15:30:...  That in itself is four columns.  They must be combined in order to be compared effectively.  To do this, we assign a combination of those four columns to two variables: $from and $to.  We then use these two variables for the comparison.  See below:

AWK Date Range for error logs:

$ awk '$from>"[Thu Aug 09 15:30:00" && $to<"[Thu Aug 09 15:59:00"' from='$1 " " $2 " " $3 " " $4' to='$1 " " $2 " " $3 " " $4' ./error_log | less
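A slightly more explicit variant (my own sketch, not the original one-liner) builds the combined timestamp inside awk itself, which avoids relying on how awk interprets the command-line variable assignments:

$ awk '{ ts = $1 " " $2 " " $3 " " $4; if (ts >= "[Thu Aug 09 15:30:00" && ts <= "[Thu Aug 09 15:59:00") print }' ./error_log | less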

Tuesday, June 26, 2012

yum - Error: database disk image is malformed

If you've ever gotten this cryptic error using yum, you'll find that it's very difficult to pinpoint the cause. The message itself, "database disk image is malformed," refers to a corrupted sqlite file. However, the RPM and YUM systems use a variety of different such files; therefore finding the right one can be difficult.

The best thing to do is to start by attempting to fix this using the available command line tool:

# yum clean all

This should solve the problem in most cases.  If the problem continues, perhaps the RPM database files are corrupted. One of my previous articles talks about rebuilding these, but I will go over it again here:

The database files are located in "/var/lib/rpm" and are named __db.001 __db.002 etc... etc...

Delete those files:

# rm -f /var/lib/rpm/__db*

Rebuild the database:

# rpm --rebuilddb

Then try to clean the yum cache as per the above command and try your yum command again.  If this continues to fail, try deleting your yum cache manually:

# rm -Rf /var/cache/yum

Now try the command again.  This should have gotten rid of the last sqlite files yum could possibly use.  The command should be able to rebuild all the databases correctly at this point.

Thursday, April 12, 2012

yum crashed with python import error - fixed corrupted rpm database

I ran into an interesting error while trying to find out which repository one of my installed packages came from.  Before we proceed, let me explain that I needed to use the "repoquery" utility, which is part of the "yum-utils" package.  I proceeded to install it, as I did not yet have it.  The installation worked perfectly well and did not pull in any dependencies.

# yum install yum-utils -y

Using the repoquery command, I attempted to query which repo my php53 package came from:

$ repoquery -i php53

Instead of getting the information I wanted, the script crashed with the following error:

File "/usr/bin/repoquery", line 38, in
from yum.i18n import to_unicode
cannot import name to_unicode

Googling around, many blog posts and sites talked about the yum installation being broken.  That may well be, but I decided to see if maybe there was something a bit simpler at play here.  First I needed to find out which yum packages were already installed on this system:

# rpm -qa | grep -i yum

yum-fastestmirror-1.1.16-16.el5.centos
yum-3.2.22-37.el5.centos
yum-updatesd-0.9-2.el5
yum-utils-1.1.16-16.el5.centos
yum-metadata-parser-1.1.2-2.el5

NOTE: The listed version numbers do not reflect the original numbers I had on my system.  I went ahead and attempted to update all of the above listed packages:

# yum update yum yum-fastestmirror yum-updatesd yum-metadata-parser

Yum only found that yum and yum-fastestmirror needed to be updated.  I proceeded with the update.

After the update, the repoquery command started working perfectly well.  However, a completely unrelated problem occurred which I will discuss very briefly.

# repoquery -i php53

Instead of getting a nice listing of information from the RPM database, I received an error message saying the database was corrupted.  The next step, then, was to rebuild the database.

The database files are located in "/var/lib/rpm" and are named __db.001 __db.002 etc... etc...

Delete those files:
# rm -f /var/lib/rpm/__db*

Rebuild the database:

# rpm -vv --rebuilddb

Once completed, I tried the repoquery command once again:

# repoquery -i php53

Name        : php53
Version     : 5.3.3
Release     : 1.el5_7.6
Architecture: x86_64
Size        : 3591477
Packager    : None
Group       : Development/Languages
URL         : http://www.php.net/
Repository  : updates
Summary     : PHP scripting language for creating dynamic web sites
Description :
PHP is an HTML-embedded scripting language. PHP attempts to make it
easy for developers to write dynamically generated webpages. PHP also
offers built-in database integration for several commercial and
non-commercial database management systems, so writing a
database-enabled webpage with PHP is fairly simple. The most common
use of PHP coding is probably as a replacement for CGI scripts.

The php package contains the module which adds support for the PHP
language to Apache HTTP Server.


Thursday, February 16, 2012

Add date to Bash History

In order to add a date stamp to your bash history add the following two lines to your .bash_profile:

HISTTIMEFORMAT='%F %T '
export HISTTIMEFORMAT

Alternatively, you can set this variable globally and have all users' history keep timestamps by putting these two lines in a file under the /etc/profile.d directory:

echo "HISTTIMEFORMAT='%F %T '" > /etc/profile.d/histtimestamps.sh
# echo "export HISTTIMEFORMAT" >> /etc/profile.d/histtimestamps.sh

# chmod +x /etc/profile.d/histtimestamps.sh

Your history will look like this:
...
902 2012-02-16 09:50:33 cd /var/log
903 2012-02-16 09:50:33 ll
904 2012-02-16 09:50:33 ls -lat | sort -t
905 2012-02-16 09:50:33 ls -lat
...

Monday, February 13, 2012

Build an SELinux policy from an audit log

Often, certain commands in Linux will simply fail without any messages in /var/log/messages or anywhere else we usually check.  However, if you look at the SELinux audit log, /var/log/audit/audit.log, the error messages are sometimes there.

For example, every once in a while after a kernel update, I can't use the talk program.  It simply says the connection is being refused by the other user.  Since I already know SELinux is the culprit, I grep the logs:

grep -i talkd /var/log/audit/audit.log

The result:

type=AVC msg=audit(1329155365.865:143): avc: denied { open } for pid=5631 comm="in.ntalkd" name="1" dev=devpts ino=4 scontext=system_u:system_r:ktalkd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:user_devpts_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1329155365.865:143): arch=c000003e syscall=2 success=no exit=-13 a0=7fffc83c0eb8 a1=101 a2=7fffc83c0ec3 a3=7fffc83c0690 items=0 ppid=5630 pid=5631 auid=4294967295 uid=99 gid=5 euid=99 suid=99 fsuid=99 egid=5 sgid=5 fsgid=5 tty=(none) ses=4294967295 comm="in.ntalkd" exe="/usr/sbin/in.ntalkd" subj=system_u:system_r:ktalkd_t:s0-s0:c0.c1023 key=(null)

Two entries showing that talk is denied.  If you really want to authorize this process, grep the tail end of the file and use audit2allow to generate a policy module that will allow it.

tail /var/log/audit/audit.log | grep '1329155365.865:143' | audit2allow -M talkpolicy

audit2allow generates a talkpolicy.pp file and will also give you instructions on how to activate it. That would be:

semodule -i talkpolicy.pp
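To confirm the module is actually loaded, you can list the installed modules (an optional check, not in the original write-up):

semodule -l | grep talkpolicy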

This will take a minute or two and has effectively authorized the blocked program to run.