Εθνική Τράπεζα (National Bank of Greece) Phishing

I recently received two phishing emails supposedly from Εθνική Τράπεζα (the National Bank of Greece). The message in both was as in the image below:

NBG phishing email
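
The message hinges on a familiar trick: the visible link text shows the bank's address, while the underlying href points at the attacker's server. Schematically, the link in such a message looks something like this (a reconstruction for illustration only, not the actual markup of the email; the paths are elided):

<a href="http://4unow.net/...">https://www.nbg.gr/...</a>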

The link text might convince you, but the real link certainly will not: the supposed link to Εθνική Τράπεζα does not point there at all. Following it, the user lands on the following form:

Picture 2

It is also quite convincing for someone who does not check the URL. The links at the top right are genuine, so once again it is easy to fall into the trap. A quick lookup for the domain 4unow.net shows that the server is located in the USA:

Picture 1
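
The lookup itself can be reproduced with standard command-line tools; a sketch, assuming the GeoIP package's geoiplookup utility is installed alongside the usual DNS tools:

$ host 4unow.net
$ geoiplookup 4unow.net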

A whois query readily reveals the owner:

$ whois 4unow.net
Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

   Domain Name: 4UNOW.NET
   Registrar: GODADDY.COM, INC.
   Whois Server: whois.godaddy.com
   Referral URL: http://registrar.godaddy.com
   Name Server: NS11.DOMAINCONTROL.COM
   Name Server: NS12.DOMAINCONTROL.COM
   Status: clientDeleteProhibited
   Status: clientRenewProhibited
   Status: clientTransferProhibited
   Status: clientUpdateProhibited
   Updated Date: 01-mar-2008
   Creation Date: 21-nov-2007
   Expiration Date: 21-nov-2009

[...]

Registrant:
   4unow
   39 Waghorn Road
   Harrow, Middlesex HA3 9ET
   United Kingdom

   Registered through: GoDaddy.com, Inc. (http://www.godaddy.com)
   Domain Name: 4UNOW.NET
      Created on: 21-Nov-07
      Expires on: 21-Nov-09
      Last Updated on: 01-Mar-08

   Administrative Contact:
      Beech, Colin  colin@4unow.co.uk
      4unow
      39 Waghorn Road
      Harrow, Middlesex HA3 9ET
      United Kingdom
      7807821626

   Technical Contact:
      Beech, Colin  colin@4unow.co.uk
      4unow
      39 Waghorn Road
      Harrow, Middlesex HA3 9ET
      United Kingdom
      7807821626

   Domain servers in listed order:
      NS11.DOMAINCONTROL.COM
      NS12.DOMAINCONTROL.COM

Doing a lookup for the real domain of the National Bank of Greece as well:

Picture 4
To be honest, I did not expect it to be in Australia!

lcfg-xen-0.99.8

New release of the component, fixing the Dom0 memory allocation bug (#149). A new resource named ‘hostmem’ has been introduced. By default it is null, which allocates all the available physical memory to Dom0. If a value is set for the resource, the component will use it as the memory size allocated for Dom0.
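
A minimal sketch of how this might be set in an LCFG source profile, assuming the usual LCFG mutation syntax and assuming the value is a memory size in MB (both are assumptions; the schema defaults file has the exact semantics):

/* reserve 512 MB for Dom0 instead of letting it take all physical memory */
!xen.hostmem    mSET(512)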

RPM: lcfg-xen-0.99.8-1.noarch.rpm
SRC RPM: lcfg-xen-0.99.8-1.src.rpm
Schema file RPM: lcfg-xen-defaults-s1-0.99.8-1.noarch.rpm

lcfg-xen 0.99.6

The new version of the lcfg-xen component for LCFG is now released: 0.99.6. There have been a number of improvements, bug fixes and additions of functionality. All the changes are listed below, as recorded in the ChangeLog file:

* New resource added – partition. It defines the root partition id of the disk image to be cloned (e.g. 3).
* Creating different disk entries for HVM and PARAVM systems. PARAVM should use Xen’s blktap driver to handle the file-based virtual block device (VBD).
* Fixed a bug that would create a list of different disk images all with the same hardware identification (e.g. hda). Each disk image is now given its corresponding letter of the English alphabet.
* Replacing dd with qemu-img when creating the initial disk image for a guest.
* New resources added – onpoweroff, onreboot, oncrash, boot, sdl, acpi, apic, pae, localtime, vnc, vncused, vncdisplay, serial. onpoweroff, onreboot and oncrash are valid for both HVM and PARAVM guests; the rest are valid only for HVM guests. This fixes the bug with Bugzilla ID 118.
* The cdpath resource assumed that the disk used as CD-ROM was an ISO image. A new resource, ‘cdtype’, has been added to define the type of the CD/DVD-ROM: image or physical. This fixes the bug with Bugzilla ID 119.
* Support for setting the memory and VCPU count of a paravirtualised guest on the fly.
* Support for creating multiple bridged network interfaces.
* Support for cloning an LCFG disk image, then editing and configuring it automatically according to its profile.
* New function for creating a new guest from an existing image (cloning).
* Manages /etc/sysconfig/xend. All settings are editable via resources.
* Manages /etc/sysconfig/xendomains. All settings are editable via resources.
* Manages /etc/xen/xend-config.sxp. All settings are editable via resources.
* New resource for specifying the type of the virtual machine. It can be either HVM or PARAVM.
* New resource for specifying the Xen network configuration: bridge, NAT or route. By default, the bridge interface(s) will be brought up.
* New resource for specifying the network configuration per virtual machine. Bridge still has the same resources; NAT and route need a private IP definition. The default is bridge. All network resources must be set for a virtual machine to enable network connectivity. The network type of a virtual machine is checked against the configuration of the Xen host; if it differs, the user is informed. The default configuration puts both host and guest on a bridged network.
* New resource for specifying the physical CPU affinity of the VCPUs.
* A bit of housekeeping has been done to the code. Long functions were broken down into smaller ones. (A profile sketch using some of these resources follows this list.)
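
A hypothetical profile fragment using a few of the resources named in this ChangeLog (only the resource names are taken from the release notes; the values and the exact mutation syntax are assumptions):

/* guest lifecycle behaviour – valid for both HVM and PARAVM guests */
!xen.onpoweroff    mSET(destroy)
!xen.onreboot      mSET(restart)
!xen.oncrash       mSET(destroy)
/* the CD/DVD-ROM is an ISO image rather than a physical drive */
!xen.cdtype        mSET(image)
/* root partition id of the disk image to be cloned */
!xen.partition     mSET(3)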

RPM: lcfg-xen-0.99.6-1.noarch.rpm
SRC RPM: lcfg-xen-0.99.6-1.src.rpm
Schema file RPM: lcfg-xen-defaults-s1-0.99.6-1.noarch.rpm

Scientific Linux 5.2/5.3 as KVM guest

I have been trying lately to get SL 5.3 and 5.2 x86_64 to run as a KVM guest. Every time, the system would drop a “Kernel panic – fatal exception”. Today I finally mailed the scientific-linux-users mailing list to check whether anybody else had seen the same behaviour. A fast reply revealed that “It is a problem with the KVM version, recent kernels and hardware interrupt virtualization. It can be solved by passing ‘-no-kvm-irqchip’ during the VM startup of the machine.” Indeed, I tried with that option and it worked like a charm:

# qemu-system-x86_64 node3.img -cdrom /dev/scd0 -boot d -m 1024 -no-kvm-irqchip

Migrating Xen virtual machine to KVM

As I’ve started looking at KVM recently, I realised that Scientific Linux 5.2/5.3 x86_64 wouldn’t build on the VM due to a kernel panic during the initial boot process, while i386 would finish installation but would hang on the first boot right after it (it wouldn’t even load GRUB – I double-checked that I was using the right disk image and boot order). Fedora 8 worked fine, but I’d prefer to have the same OS version deployed on the different platforms.
I had a go with the existing Xen images I already had. In the first place, the Xen VMs were paravirtualised and were therefore using the Xen kernel, something that doesn’t work with KVM (at least not that I’m aware of). The easy workaround was to install a normal Linux kernel on the Xen disk image and then use that disk image with a KVM virtual machine. I presume that if the Xen VMs had used a normal Linux kernel, there would have been no need to do anything other than use the existing image for the KVM VM.
The process is as follows:

First, create the device-mapper mappings for the disk image's partitions:

kpartx -a node1.img

You can check which device it is with ‘kpartx -l node1.img’:

# kpartx -l node1.img 
loop0p1 : 0 8177022 /dev/loop0 63

Then mount the filesystem:

mount /dev/mapper/loop0p1 /media/xenvm

Chroot into the Xen VM filesystem:

chroot /media/xenvm

I only had to edit resolv.conf, adding the site's domain resolver:

vi /etc/resolv.conf

I performed an update of the existing packages, to avoid any package conflicts during the kernel installation, and then installed the normal Linux kernel:

yum update
yum install kernel

One of the last touches on the VM’s filesystem was to configure Grub by adding the following as the default boot option:

title Scientific Linux 5
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-128.1.10.el5 root=LABEL=/ ro quiet
initrd /boot/initrd-2.6.18-128.1.10.el5.img
boot

Then exit the chroot environment:

exit

Unmount the disk image and remove the device-mapper mappings:

umount /media/xenvm
kpartx -d node1.img

The disk could then be used with KVM with no problems at all, both via libvirt and in a direct KVM/QEMU environment.
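
Booting the converted image under KVM is then an ordinary invocation; for example (the memory size is arbitrary here, and the qemu-kvm binary may live elsewhere depending on how KVM was packaged):

# qemu-kvm -hda node1.img -m 1024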

KVM on Scientific Linux 5.2 at a glance

Red Hat announced a few months ago that they will drop Xen and move to KVM instead. Presumably, the Red Hat based distros will do the same. As far as I know, Fedora already uses KVM as the default virtualisation technology.

Installing KVM at a glance:
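
The install command itself is not shown here; on a RHEL 5 family system with KVM packages available it would be something along these lines (the package names are an assumption – on stock SL 5.2 you may need a third-party repository or a source build, as discussed below):

# yum install kvm kmod-kvm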

Once KVM is installed, load the appropriate module depending on the processor:

For Intel-VT:

modprobe kvm_intel

For AMD-SVM:

modprobe kvm_amd

The required module can be added to /etc/modules.conf to automate loading at boot.

On Red Hat based systems, the Virtual Machine Manager will need to run the qemu-kvm command instead of qemu-system-* when it is about to start a VM instance. RPM packages should provide qemu-kvm, but when KVM is built from source this binary is not created; instead there is a set of binaries for the different supported (virtual) hardware architectures (ppc, sparc, mips, mipsel). The easy workaround is to symlink /usr/bin/qemu-kvm to the desired binary, but that will cause problems if you try to build a virtual machine for an architecture other than the one the link points to. A small wrapper script should work, though (see the sketch below). If you start a VM instance from the command line, you can just use the appropriate binary directly, without worrying about symlinks.
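
A minimal sketch of such a wrapper, assuming a source-built KVM with the per-architecture binaries under /usr/local/bin, and using an environment variable (QEMU_ARCH, a name invented here) to pick the target architecture:

#!/bin/sh
# /usr/bin/qemu-kvm wrapper: dispatch to the per-architecture QEMU binary
# instead of pointing a fixed symlink at just one of them.
# Defaults to x86_64 when QEMU_ARCH is not set.
ARCH="${QEMU_ARCH:-x86_64}"
exec "/usr/local/bin/qemu-system-${ARCH}" "$@"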