
Remotely Reboot Netgear MBRN3000

I made this script to remotely restart our Netgear MBRN3000, because there's no way to schedule a reboot directly on the router.
I need to restart it because this UMTS router randomly drops the internet connection over the UMTS key, and there's nothing you can do to bring the connection up again except power off the router.

I named the script rebootUMTS.sh and placed it in /opt/.

This is the code:

#!/bin/sh
# POST the reboot command to the router's web interface
wget -q --post-data "button=reboot" http://USERNAME:PASSWORD@172.18.0.2/reboot.cgi
# log each restart with a timestamp
echo "$(date) - Restart router" >> /opt/restart.log

and in crontab I've scheduled it to run every day at 7:00 AM, 1:00 PM and 8:00 PM:

0	7,13,20	*	*	*	/opt/rebootUMTS.sh
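Remember to make the script executable, otherwise cron will silently skip it:

chmod +x /opt/rebootUMTS.sh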

NFS VMware Datastore with QNAP

The new QNAP NAS also has an NFS service onboard, so I want to try using it as an "add-on datastore" for my VMware infrastructure.
I know the performance will be poor, but I'll use it to keep CD/DVD images, test virtual machines and, why not, backups.

First of all we have to check whether our ESX hosts can reach the NAS (through the VMkernel interface, I mean, not the management one), so SSH into your ESX host and try:

root@esxhost# vmkping your.nas.ip

If you get a response you are done; if not, you have two solutions:

  • put your NAS on the VMkernel's network
  • add another VMkernel interface

I've added another VMkernel interface, because we use the first one for VMotion and I want to keep the two separate.

So, open the VI Client, from the inventory view choose "Hosts and Clusters", select the first ESX host and go to the "Configuration" tab, then select "Networking" and "Add Networking".

Using the wizard, select VMkernel and click Next. According to your network configuration, select the vSwitch that can communicate with your NAS.

Give a name, IP address and netmask (on the same network as the NAS) to the new interface.

Click Next and finish.

Try "vmkping" again and you should get a response.

Now repeat these steps for all the ESX hosts you have.
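If you prefer the service console to the GUI, the same VMkernel setup can be scripted on classic ESX; a sketch, where "NFS-VMkernel", vSwitch1 and the IP/netmask are examples to replace with your own values:

# create a port group for the new VMkernel interface on vSwitch1
esxcfg-vswitch -A "NFS-VMkernel" vSwitch1
# attach a VMkernel NIC with an address on the NAS network
esxcfg-vmknic -a -i 192.168.10.10 -n 255.255.255.0 "NFS-VMkernel"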

The instructions from QNAP on using NFS with VMware are not correct, because ESX can use NFS over TCP only; unfortunately the QNAP NAS serves NFS over UDP.

So we have to "force" the QNAP to use TCP instead of UDP.
In the NAS configuration pages there's no way to change this, so we have to connect via SSH and edit this file:
/etc/init.d/nfs

The line to change is #132; after the edit it should read:

NO_V4="-N 4 --no-udp"
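If you prefer to apply the edit non-interactively, something like this should work (a sketch: it assumes the stock line reads NO_V4="-N 4" and that your firmware's sed supports -i; check the file first):

# back up the init script, then append --no-udp to the NO_V4 options
cp /etc/init.d/nfs /etc/init.d/nfs.bak
sed -i 's/NO_V4="-N 4"/NO_V4="-N 4 --no-udp"/' /etc/init.d/nfs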

Reload the NFS service:

/etc/init.d/nfs restart
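To verify that the change took effect, you can check which protocols the NFS service registers; a quick test, assuming rpcinfo is available on the QNAP firmware:

# after the restart, nfs should be registered on proto tcp
rpcinfo -p localhost | grep nfs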

Now, through the web management interface, we can set permissions on the share we want to use.

I permit full access to this share from both ESX hosts.

So, open the VI Client again, from the inventory view choose "Hosts and Clusters", select the first ESX host and go to the "Configuration" tab, then select "Storage" and "Add Storage".

Follow the wizard for the configuration:

Select "Network File System"

Fill in the IP address or name of your NAS; in the Path field, put the name of the share you previously defined on the NAS.

Click Next and finish.

Repeat these steps with the same data on all your ESX hosts and you are done.
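If you have many hosts, the same mount can also be scripted from the ESX service console instead of the wizard; a sketch, where the share path /VMware and the datastore label "qnap-nfs" are examples to adapt:

# mount the NAS export as an NFS datastore
esxcfg-nas -a -o your.nas.ip -s /VMware qnap-nfs
# list the configured NFS datastores to confirm
esxcfg-nas -l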

N.B.
This how-to requires that you know what you are doing.
I'm not responsible if you destroy your production machines by following my instructions.

Migrate an IBM System i partition (VMotion-like, not "live")

Today I'll explain how to move an IBM System i partition between physical blades; this can be done if you are using external storage (SAN) and not internal disks.

This is the configuration:

BLADE 6
partition 1 -> Vios
partition 2 -> Galileo (production System i partition)
partition 3 -> ArchimedeBK (backup partition)
BLADE 7
partition 1 -> Vios
partition 2 -> Archimede (production System i partition)
partition 3 -> GalileoBK (backup partition)

The goal is to move the Archimede production partition from BLADE 7 to the ArchimedeBK partition on BLADE 6.

First of all, power down the partition; then, from the BLADE 7 VIOS web interface, check which disks are attached to this partition.

Then log in to the VIOS console with the padmin user and check the hdisk device assignments (in my configuration the disks are hdisk2 and above, because hdisk1 is the internal SAS disk):

$ lsmap -all
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U7778.23X.067976A-V1-C11                     0x00000000

VTD                   vtopt0
Status                Available
LUN                   0x8400000000000000
Backing device
Physloc

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk2
Physloc               U78A5.001.WIH68A0-P1-C11-L1-T2-W200500A0B8500BB8-L1000000000000

VTD                   vtscsi1
Status                Available
LUN                   0x8200000000000000
Backing device        hdisk3
Physloc               U78A5.001.WIH68A0-P1-C11-L1-T2-W200500A0B8500BB8-L2000000000000

VTD                   vtscsi2
Status                Available
LUN                   0x8300000000000000
Backing device        hdisk4
Physloc               U78A5.001.WIH68A0-P1-C11-L1-T2-W200500A0B8500BB8-L3000000000000

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost1          U7778.23X.067976A-V1-C12                     0x00000000

VTD                   vttape0
Status                Available
LUN                   0x8100000000000000
Backing device        rmt0
Physloc               U78A5.001.WIH68A0-P1-T5-LC000-L0

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost2          U7778.23X.067976A-V1-C14                     0x00000000

VTD                   vtopt1
Status                Available
LUN                   0x8400000000000000
Backing device        /var/vio/VMLibrary/AbetSystemi1
Physloc

VTD                   vtscsi3
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk6
Physloc               U78A5.001.WIH68A0-P1-C11-L1-T2-W200500A0B8500BB8-LA000000000000

VTD                   vtscsi4
Status                Available
LUN                   0x8200000000000000
Backing device        hdisk7
Physloc               U78A5.001.WIH68A0-P1-C11-L1-T2-W200500A0B8500BB8-LB000000000000

VTD                   vtscsi5
Status                Available
LUN                   0x8300000000000000
Backing device        hdisk8
Physloc               U78A5.001.WIH68A0-P1-C11-L1-T2-W200500A0B8500BB8-LC000000000000

Now you can remove the vtscsiX devices associated with the hdiskX devices:

$ rmdev -dev vtscsiX

Then remove the hdiskX devices:

$ rmdev -dev hdiskX

Now open the DS Storage Manager (I have an IBM DS4700 SAN).

Select the physical blade containing the partition you want to move; on the right, select the logical drives (one at a time), choose "Change Mapping" and assign them to BLADE 6.

Be careful to maintain the current LUN IDs.


Now log in to the VIOS console of the second blade (as padmin) and list the current devices:

$ lsdev |grep hdisk
hdisk0        Available SAS Disk Drive
hdisk1        Available MPIO Other DS4K Array Disk

We see no new disks yet (or only other System i partitions' disks), so scan for new devices:

$ cfgdev

and list the devices again:

$ lsdev |grep hdisk
hdisk0        Available SAS Disk Drive
hdisk1        Available MPIO Other DS4K Array Disk
hdisk2        Available MPIO Other DS4K Array Disk
hdisk3        Available MPIO Other DS4K Array Disk
hdisk4        Available MPIO Other DS4K Array Disk

On the new blade's VIOS web interface we can assign the "new" disks to the preconfigured backup partition by going to the Storage tab and flagging the corresponding hdisks.
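If you prefer the padmin console to the web interface, the same mapping can be created with mkvdev; a sketch, assuming vhost0 is the virtual SCSI adapter of the backup partition and the hdisk numbers match the lsdev output above:

$ mkvdev -vdev hdisk2 -vadapter vhost0
$ mkvdev -vdev hdisk3 -vadapter vhost0
$ mkvdev -vdev hdisk4 -vadapter vhost0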

You can now power on the partition on the second blade.

N.B.
This how-to requires that you know what you are doing.
I'm not responsible if you destroy your production machines by following my instructions.

SSD drive on a ThinkPad T60

Hello,
last week my new SSD drive arrived, so I want to tell you the REAL performance of this disk.
We bought a 128 GB Kingston SSDNow V+ Series SATA2 drive, model SNVP325-S2/128GB, to use in an IBM ThinkPad T60.

Before installing this drive I ran a test on the original IBM SATA2 drive; these are the details:


Writing 1 GB

crash@hal9000:~$ dd if=/dev/zero of=test1GB bs=4k count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1,0 GB) copied, 38,6391 s, 26,5 MB/s

Reading 1 GB

crash@hal9000:~$ dd if=test1GB of=/dev/null bs=4k count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1,0 GB) copied, 22,2653 s, 46,0 MB/s

As you can see, the real performance is 26,5 MB/s (write) and 46,0 MB/s (read).
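One caveat: plain dd numbers can be inflated by the Linux page cache. If you want to repeat the test more strictly, you can force the write to reach the disk and drop the caches before reading back; a sketch of the same test with those precautions (drop_caches needs root):

# fdatasync forces the data to disk before dd reports the timing
dd if=/dev/zero of=test1GB bs=4k count=250000 conv=fdatasync
# drop the page cache so the read really hits the disk
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=test1GB of=/dev/null bs=4k count=250000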

Then I cloned my system (use the tool you prefer) and installed the new SSD.
I restored the OS with no problems, and the operating system (Debian squeeze) boots in 7 seconds.

Amazing...

Then I repeated the test:


Writing 1 GB

crash@HAL9000:~$ dd if=/dev/zero of=test1GB bs=4k count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1,0 GB) copied, 7,39791 s, 138 MB/s

Reading 1 GB

crash@hal9000:~$ dd if=test1GB of=/dev/null bs=4k count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1,0 GB) copied, 3,50632 s, 292 MB/s

So... the new performance is 138 MB/s (write) and 292 MB/s (read).

Amazing....

*****************************************

UPDATE: more info:

HAL9000:~# hdparm -t /dev/sda

/dev/sda:
Timing buffered disk reads:  334 MB in  3.01 seconds = 110.84 MB/sec
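For comparison, hdparm can also print the cached (RAM) read speed next to the buffered disk reads; run as root:

HAL9000:~# hdparm -tT /dev/sda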

Asus EeeBox as squid proxy

Yesterday the Asus EeeBox model B202 black edition arrived.

My goal is to use this compact and very, very silent appliance as a Linux proxy, so I have to wipe the preinstalled system and install a fresh Debian lenny.

First of all you have to choose the installation method; you have three options:

1- USB pendrive

2- USB CD-ROM

3- PXE network boot

The first method I tried (USB pendrive) failed to boot; I don't know why, but after several tries I still couldn't boot from the USB pendrive. For me this is not a problem: I also have an Asus USB CD/DVD-RW drive, so I tried the Debian lenny netinstall CD.

Unfortunately the Debian lenny installer doesn't recognize the Realtek r8169 network card correctly, and the network won't work...

Thinking....

Solution: use a USB network adapter...

Searching on my desk...

I've found a Sitecom LN-029 that works perfectly on Linux, so the installation can now start.

Install Debian without any tasks, so deselect "Standard system" and "Desktop environment".

After rebooting the system you can install squid and ssh:

apt-get update && apt-get install squid squidguard ssh

That's all, now configure squid and squidguard to fit your needs.
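As a starting point, here is a minimal squid.conf sketch that only allows clients from your LAN (the subnet is an example; adapt it to your network):

# /etc/squid/squid.conf (minimal sketch)
http_port 3128
# allow only clients from the local subnet
acl lan src 192.168.0.0/24
http_access allow lan
http_access deny all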