SPARCbook 3 (1994?)
Macintosh Powerbook 5300 series (1995)
Free / Open Source Software Group of the Computer Science Department, University of Ioannina: http://fosscsuoi.wordpress.com/
AutoFS is an automounter of storage devices for Linux and UNIX operating systems. An automounter lets the user mount a directory only when it is needed, i.e. when it is actually accessed. After some period of inactivity, the filesystem is unmounted again. The automounter’s main configuration file is /etc/auto.master (sometimes auto_master, mainly on Solaris).
/misc   /etc/auto.misc
/net    -hosts
+auto.master
The interesting parts are the first two entries. The first field of each entry specifies the root directory which autofs will use for the mount points. The second field specifies the file that contains the mount point information. Take the second entry, for instance: the /net directory will be used as the root directory and /etc/hosts will provide the mount point information. That means autofs will make all the NFS exports of all the hosts listed in /etc/hosts available under /net.
Let’s say now that you want to mount, on demand, specific NFS exports from your file server, and that you want to mount them under /mount/nfs. First, you’ll need to create the file that will contain the mount point information. The file could be /etc/nfstab or whatever you like. You can specify the entries in the following, easily understandable format:
music   -rw             192.168.1.10:/exports/music
photos  -ro             192.168.1.10:/exports/photos
apps    -rw,nosuid      192.168.1.10:/exports/apps
You can leave the options out entirely or specify as many as you need; the options that apply to ‘mount’ apply to AutoFS as well. Once you have created the list file, you need to add it to /etc/auto.master, which should then look like this:
/misc       /etc/auto.misc
/net        -hosts
/mount/nfs  /etc/nfstab
+auto.master
You can also use the automounter to mount non-network filesystems.
The next step is to restart the autofs daemon. Having done so, you should be able to access the three shares. Note that they may not show up under the directory until you try to access them.
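On a Red Hat-style system of that era, restarting the daemon would typically be something along the lines of:
service autofs restart
(or /etc/init.d/autofs restart, depending on the distribution).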
A look at /etc/auto.misc gives a few examples:
cd -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom
This will mount the CD/DVD drive under /misc/cd when a user or a service tries to access it.
We have a Dell PowerEdge R900 server with two 300GB SAS disks forming a RAID 1. We needed around 2TB of extra space, so we bought two 1TB SAS disks and attached them to the machine.
I expected that Linux (Scientific Linux 5.2) would automatically see the disks and show them in the “fdisk -l” output, but unfortunately that didn’t happen. Checking with “dmesg”, the disks had been detected (along with the PERC controller, of course):
megasas: FW now in Ready state
scsi1 : LSI SAS based MegaRAID driver
  Vendor: SEAGATE   Model: ST3300555SS    Rev: T211
  Type:   Direct-Access                   ANSI SCSI revision: 05
  Vendor: SEAGATE   Model: ST3300555SS    Rev: T211
  Type:   Direct-Access                   ANSI SCSI revision: 05
  Vendor: SEAGATE   Model: ST31000640SS   Rev: MS04
  Type:   Direct-Access                   ANSI SCSI revision: 05
  Vendor: SEAGATE   Model: ST31000640SS   Rev: MS04
  Type:   Direct-Access                   ANSI SCSI revision: 05
usb 1-7: new high speed USB device using ehci_hcd and address 3
  Vendor: DP        Model: BACKPLANE      Rev: 1.06
  Type:   Enclosure                       ANSI SCSI revision: 05
  Vendor: DELL      Model: PERC 6/i       Rev: 1.11
  Type:   Direct-Access                   ANSI SCSI revision: 05
I was a bit puzzled as to why the system didn’t show the disks even though they had been detected. Checking /proc/scsi/scsi just confirmed that the disks weren’t available to the system at all:
cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 32 Lun: 00
  Vendor: DP       Model: BACKPLANE      Rev: 1.06
  Type:   Enclosure                      ANSI SCSI revision: 05
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: DELL     Model: PERC 6/i       Rev: 1.11
  Type:   Direct-Access                  ANSI SCSI revision: 05
After a bit of Googling I came across a forum post saying that the disks have to be built into an array before they become available to the system. In other words, the disks have to be online and configured on the PERC controller, and the controller then makes them available to the system. The next step was to reboot the machine and run the disk configuration utility.
Within the utility, the steps for creating a RAID 0 array to concatenate the disks were:
– Select the right PERC controller
– Check the disks’ status on the Physical Disk Management page (the status for the new disks was “OFFLINE”, which explained why they weren’t accessible)
– Return to the Virtual Disk Management page
– Select Controller 1 from the top
– New Virtual Disk
– Select the two available hard drives
– Check available space
– Specify name
– Select stripe option
– OK
– Return to VD Management page
– Exit the utility
– Reboot the machine
– Job done 🙂
After that, “fdisk -l” displayed the new /dev/sdb device with 2TB of free space. Just to confirm that everything was there:
cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 32 Lun: 00
  Vendor: DP       Model: BACKPLANE      Rev: 1.06
  Type:   Enclosure                      ANSI SCSI revision: 05
Host: scsi1 Channel: 02 Id: 00 Lun: 00
  Vendor: DELL     Model: PERC 6/i       Rev: 1.11
  Type:   Direct-Access                  ANSI SCSI revision: 05
Host: scsi1 Channel: 02 Id: 01 Lun: 00
  Vendor: DELL     Model: PERC 6/i       Rev: 1.11
  Type:   Direct-Access                  ANSI SCSI revision: 05
And the last thing was to create a huge filesystem and mount it on the system 🙂
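For example, something along these lines would do it (the /data mount point is just a placeholder, and whether you partition the virtual disk first or use it whole is a matter of taste):
mkfs.ext3 /dev/sdb                                        # create an ext3 filesystem on the whole virtual disk
mkdir -p /data                                            # hypothetical mount point
mount /dev/sdb /data
echo "/dev/sdb  /data  ext3  defaults  0 2" >> /etc/fstab # mount it at boot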
LCFG is a system for automating the installation and configuration of large numbers of UNIX systems. It is mainly aimed at rapidly changing environments with many different configurations. Its name is taken from Local ConFiGuration.
Its development was started in 1993 by Paul Anderson in the Department of Informatics at the University of Edinburgh. That first version only worked on Solaris. Over the following years, Alistair Scobie produced a port of LCFG for Linux with a completely new installation system based on RPM packages. LCFG was thus used for setting up machines running Red Hat Enterprise Linux at first, and Fedora Core later on. In the years that followed, more and more people contributed to LCFG, mainly from within the University.
Over the last year, LCFG has been ported to Scientific Linux 5, which is essentially a recompiled Red Hat Enterprise Linux 5.
How it works
Each machine managed by LCFG has a profile on a central server. Each profile’s filename is the machine’s hostname, and the profile includes headers in the same way as in C. Each header describes an aspect of the system’s configuration, for example the machine’s model, the fact that it is a web server, and so on. Using LCFG components, each machine can have its own separate settings, all managed by LCFG.
A daemon on the central server generates an XML file from each profile and then publishes it on a web server, from which each machine fetches its own profile. Every change to a profile produces a new XML file, which in turn changes the corresponding settings on the corresponding machine.
Each LCFG component comes with a number of scripts which are installed on the client depending on the subsystem to be set up (e.g. MySQL server, web server, DNS server). Each component is notified when a resource related to its functionality changes, and it then updates the system it runs on accordingly. Conversely, if changes are made to the system without being declared in the profile, the corresponding component restores the settings as they appear in the profile.
One LCFG component is responsible for which packages are installed on the system. It checks the installed packages against a list of the packages that are supposed to be installed. If a package has been removed without this being declared in the profile, it is automatically reinstalled. Exactly the opposite happens if a new package is installed without being declared in the profile.
New machines can be installed automatically using a boot CD or via PXE. For the installation to take place, a profile for the new machine must already exist. When the installation completes, the system can be ready and configured to provide a whole series of services that would take hours to set up if done by hand.
Supported systems
Today, LCFG supports Fedora Core 6, Scientific Linux 5, Mac OS X and Solaris 9. Most of the work and support goes into Scientific Linux.
Syntax
Before the profiles are published, they are run through a C preprocessor and, if no errors are found, they are published. The use of the C preprocessor makes the profile syntax easy to understand. For example, say I want to declare that a new machine is a Dell PowerEdge 2950 server. Assuming a header with the necessary settings for the server’s hardware already exists, all we have to declare in the new profile is:
# include <poweredge2950.h>
To configure the eth0 interface:
!network.interfaces mADD(eth0)
!network.device_eth0 mSET(auto)
!network.ipaddr_eth0 mSET(192.168.0.10)
!network.netmask_eth0 mSET(255.255.255.0)
!network.network_eth0 mSET(192.168.0.0)
!network.broadcast_eth0 mSET(192.168.0.255)
!network.onboot_eth0 mSET(yes)
!network.gateway mSET(192.168.0.1)
Adding a new group to /etc/group:
!auth.extragroup mADD(lcfg)
!auth.grpent_lcfg mSET(lcfg:x:1024:)
“network” and “auth” are two of the LCFG components, followed by the resources we want to assign values to.
links:
http://www.lcfg.org/
In 2006, Google Summer of Code sponsored the ZFS-FUSE project, which aims to bring Sun’s ZFS to Linux. Because of the incompatibility between Sun’s CDDL license, under which ZFS is distributed, and the GPL license of the Linux kernel, ZFS can’t be ported at the kernel level. The workaround is to run the filesystem in userspace through FUSE. A month ago, version 0.5.0 of zfs-fuse was released.
So, how can you get zfs-fuse running on your Linux system? In order to compile and use zfs-fuse, you need the following:
– Linux kernel 2.6.x
– FUSE 2.5.x or greater
– libaio and libaio-devel
– zlib and zlib-devel
– glibc version 2.3.3 or newer with NPTL enabled.
– SCons
For compiling the code:
cd zfs-fuse-0.5.0
cd src
scons
If all goes fine, then proceed with the installation:
scons install install_dir=/installation/target
If you don’t define a specific directory to install into, the binaries will be placed in /usr/local/sbin. If you install the binaries in a different directory, don’t forget to add that directory to your $PATH:
export PATH=/installation/target:$PATH
Once the installation has finished you can start the zfs-fuse daemon. However, before starting the daemon you’ll need to create the /etc/zfs directory:
mkdir /etc/zfs
That directory used to be created automatically in previous releases, but apparently not in 0.5.0. The directory is used to store the zpool.cache file, which holds the information about your pools. Having created the directory, all that is left is to start the zfs-fuse daemon by simply running the command ‘zfs-fuse’.
You can now create your first pool by issuing the command:
zpool create zfsRoot /dev/sdb
You can replace “zfsRoot” with whatever name you like for your pool; it will then be created under /. The device at the end of the command is the disk you want to start using for your ZFS pool. If you want to add /dev/sdc as another disk to the pool, run:
zpool add zfsRoot /dev/sdc
With the pool set up, new filesystems can be created. Creating a filesystem with ZFS is as easy as this:
zfs create zfsRoot/filesystem1
Data can now be written to the new ZFS filesystem. You’ve probably noticed that no capacity has been defined for the filesystem. Left as it is, it will use as much space as it needs, up to whatever the pool offers. If you want to avoid that, you can set a quota:
zfs set quota=10G zfsRoot/filesystem1
This will set the maximum capacity of the filesystem to 10 GB.
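To check that the quota is in place, something like the following should do it:
zfs get quota zfsRoot/filesystem1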
So far so good. But what happens if the machine gets rebooted? You’ll need to do some manual work. As long as you have created the /etc/zfs directory, everything should be fine. The process is the following:
– Start zfs-fuse daemon
– Import the existing pools:
zpool import
– Mount the existing filesystems
zfs mount zfsRoot/filesystem1
Then everything should be back in place. The problem is that doing this manually every time is error-prone and certainly doesn’t save any time.
I have written a script which starts and stops the zfs-fuse daemon and mounts any existing filesystems. In order to mount the filesystems, the script reads them from /etc/zfstab, which lists the filesystems in the normal ZFS format, e.g. ‘zfsRoot/filesystem1’.
Depending on the host operating system, the script can be configured as a startup script, so you won’t have to run it manually at all.
The script is setZFS
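For reference, here is a minimal sketch of what such a script might look like. It is not the actual setZFS script; it assumes the /etc/zfstab format and pool name used above, with zfs-fuse, zpool and zfs available in $PATH:

#!/bin/sh
# Minimal zfs-fuse start/stop sketch.
# Assumes /etc/zfstab lists one ZFS filesystem per line (e.g. zfsRoot/filesystem1).
ZFSTAB=/etc/zfstab

case "$1" in
    start)
        zfs-fuse                    # start the userspace daemon
        sleep 2                     # give it a moment to initialise
        zpool import -a             # import all pools found on attached devices
        while read fs; do
            [ -n "$fs" ] && zfs mount "$fs"    # mount each filesystem listed in /etc/zfstab
        done < "$ZFSTAB"
        ;;
    stop)
        zfs unmount -a              # unmount all ZFS filesystems
        pkill zfs-fuse              # stop the daemon
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac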
links:
http://zfs-on-fuse.blogspot.com
http://groups.google.com/group/zfs-fuse
One of our servers, a Dell PowerEdge 1850 running CentOS 4.5 with Xen and hosting a couple of virtual machines, started flashing one of its front panel lights, next to the hard disks, and beeping as well. The machine was under a three-year warranty, so I thought it best to contact Dell UK Server Support.
The e-mail conversation follows below, with some comments from my side between the messages.
My initial message:
Our PE1850 server is having an amber flashing light next to the second hard disk, on the right. There's also a amber flashing light on the back of the server. The machine is on service so I can't perform any of the Dell diagnostic tests. If there's no other way to gather information about the problem, then I'll arrange to do so.
The first response from Dell:
With regards to the drive failure on the system in question , in order to determine root cause of the drive failure on the system we will need to obtain a log from the raid controller . Attached on this email is an program that allows us to grab the hardware log from the raid controller . This can be done within the operating system so there will be no need to down the system in order to do this . If you could reply with the output logfile we will examine it and determine the next action.
From that reply, what I understood was that the technician was indicating a hard disk failure. Since I hadn’t mentioned anything like that, I made it clear in my next e-mail. Also, the attached file was called Creating and Using the LSI Controller Log (TTY Log).oft, which did not look like an executable to my eyes:
I have to mention that there's no indication if it's a hard disk failure or something else. The disks seem to be working fine so far. We have a few VMs on one of them and none of them have reported any problems. How do I run this binary file in Linux? Making it executable and trying to run it doesn't work.
After a phone call from Dell’s support, we resolved the misunderstanding and they sent me the right tool to get the required logs off the server and send them to Dell:
As per our telephone conversation please reply to this mailbox with the controller log file .
During the phone call, I asked the friendly Dell support guy to send an engineer to have a look and replace the faulty disk. So:
As discussed you will not be able to receive an engineer till next week . As a result please reply to this email closer to the date you want the service call to take place .
An hour and a half after I received that e-mail, I got another one, from a different Dell support guy, in response to the server’s logs:
This is not a hard disk problem, this is referring to another problem. I read through your logs, and the good news is that your array is in good health. The normal cause would be not having both power leads attached. If this isn't the case, we will need to obtain the logs from the onboard management chip. This will enable us to see exactly why the light is flashing. Have you installed server administrator on the machine?
My reply:
I'm glad to hear that there's not disk array failure and I guess it's good that we didn't close the ticket and sending a technician... I'm not aware if server administrator is installed but as far as I can see (by open ports) it's not running. Is there any quick way to figure out? If it's not installed, is there a way to gather information without running the diagnostics?
I had to ask those silly questions as I wasn’t really familiar with Dell’s Server Administrator tool.
In response to that e-mail, a third guy replied, bringing some light to the matter:
I went through the controller log file you sent to us previously one more time and although it indeed lists the 2 HDDs in an online state that was true when the controller first started - around the 14th of July. Since then there is a message stating that disk ID0 failed on the 2nd of November. There were no errors logged prior to disk failure so it is not clear if the disk itself is faulty or not. In either case we must first check if the disk needs to be replaced by running some diagnostics on it. If all diags pass then we can rebuild the disk back into the array.

If it turns out that the disk is faulty may I note that according to our records the server has been originally shipped with 2x73GB Seagate HDDs whereas presently there are 2x300GB Fujitsu drives in the system. If the drives were purchased from Dell then we will need a Dell order number for these 2 drives before we can replace them. If you had the drives purchased from a 3rd party then you will have to replace them yourself. Please follow the procedure below:

1. First, what we need to do is make sure the drive is seated properly in its slot. I suggest you remove the drive for 1-2 minutes and then reinsert it back into the system. Doing that alone may force the drive to start rebuilding. Monitor the LEDs on the drive and see if there is any activity on it (blinking green LED) after inserting. If so - the drive is rebuilding and you should check the status of it in 2-3 hours. If after several hours the LED on the drive turns green and the LED on the back of the server turns blue then the drive has successfully rebuilt back into the array. You can leave it at than or proceed with the diagnostics below in case you want to be certain the drive is OK.

2. In order to run diagnostics on the disk after reseating it you can wither you Dell 32-bit diags from a bootable CD: (...) or try running Dell PEDiags from within Linux: (...) Run the extended diagnostics on disk ID0 (or both) and let us know if any errors occur. If any of the disks fail please make sure you have your Dell order number when you contact us again so that we could book you a replacement drive.
Before following his advice and removing the hard disk to force the array to rebuild, I migrated all the running virtual machines to another Xen server. I then installed Dell’s Server Administrator tool and ran the diagnostics:
I have installed pediags and run the diagnostics on all the devices except the NICs as they would be disconnected. There were no errors reported at all and I was wondering if I should proceed with the rebuilding of the array? All the hardware we have into Dell machines is ordered directly from Dell, the same goes for those hard disk drives.
Then a fourth guy replied:
As long as the diags passed i would suggest to proceed with the rebuild.
And so I did, following their previous instructions. The results were sent with my next mail:
I forced the server to rebuilt as you instructed me. I took out the one disk, kept it for 1-2 minutes and then re-inserted it. The only lights were, and still are, flashing amber (on the disk itself) and the status LED remains still flashing red, the same as the flashing light at the back of the server.
And then, guess what? A fifth guy replied to my last e-mail:
It looks as though this drive is going to need to be replaced. In the CTRL-R we would not be able to verify the drives. Reseating while up should have caused the drive to try and rebuild. In order to get a replacement drive out to you we are going to need a few details. Can you please supply us with two contact people onsite and their phone numbers as well as the complete physical address of the server including the post code. Will you also let us know if you are happy to fit the new drive on your own or would you prefer and engineer onsite to replace the drive? Thank you for running through these tests with us.
I started getting pissed off:
How will you replace a hard drive that you don't know if it's faulty or not? Both HDD had green lights but the system's LED was flashing amber. When I run the diagnostics I didn't receive any errors from the array or any other part of the hardware. From the systems logs, another Dell support person suggested to rebuild and I did so by taking out one of the hard disks.
Then the third guy called me back and followed up with this reply:
As discussed on the phone, there was a misunderstanding on the first mail sent, and this has propogated itself throughout this mail thread. There was a small error several months ago on one of the disks, but this can be ignored as the reason for your current situation. You have proven this by successfully rebuilding the array. The onboard management led is flashing to indicate and error. We can pull the onboard management logs using server administrator, and the Dell diagnostics should also access this. You are going to check both power leads are attached, and if so, run diagnostics to pull the oboard logs. You will then email the response back to this address.
For me, the misunderstanding was still going on:
Unfortunately the misunderstanding still continues.... Maybe my fault. I have followed the process that was described to me in order to re-built the array. That was: take out the hard disk for 1-2 minutes, re-insert it and that should force the rebuilding. I just checked the machine and the OS was frozen. I rebooted and there was a message saying that "Logical Drive(s) failed". I had two options from this point (1) Run the configuration utility and (2) continue. I first choose the 2n option to continue but the OS wasn't loading, then the system was trying to perform a network boot. I rebooted and then chose the first option and got into the configuration utility. Both disks in the configuration menu were marked as "fail". I tried to clean the configuration, erase the existing logical volume and then create a new one. The existing one was erased but I wasn't successful to create a new one. Could you please send a technician next week in order to rebuild the array?
I’d guess that was because I followed the earlier instructions for “rebuilding the disk array”.
But still, a technician would not be coming. I don’t mind, as long as I can get the disk array rebuilt easily:
Just tried calling you, I left a message. Getting an array created should take a few seconds on the phone, but what you have done will likely have erased what was on the drives. I'll try to get someone to contact you in the morning, as I am off from tonight until Tuesday morning.
So I got a call the next morning and was given instructions on how to rebuild the disk array. To be honest, it was a straightforward configuration if you know where to look.
Finally, the host started up again with no flashing lights or beeping. The OS and the virtual machines were still there (against all odds). However, destroying the array and rebuilding it took 10 working days and five Dell employees!
I must say that this was the only bad experience I’ve had with Dell (server) support. Apart from that, most of my other requests have been handled properly within two days.
Trying to run the simplest Fortran program that still links against the MPI library provided by OpenMPI fails. The file is named test.F90 and the code compiles successfully with mpif90:
mpif90 -o test test.F90
The code itself:
program main
  use mpi
  implicit none

  integer :: ierror

  call mpi_init(ierror)
  call mpi_finalize(ierror)
end program main
Trying to run the binary:
# ./test
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,0,0]: OpenIB on host localhost was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
libibverbs: Fatal: couldn't read uverbs ABI version.
CMA: unable to open /dev/infiniband/rdma_cm
The warnings could be ignored, but the program never finishes; the expected output, which should be nothing at all, never comes. Checking with strace, it looked like there was a deadlock somewhere.
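For instance, something as simple as the following is enough to watch the hang (the exact strace invocation here is just an illustration):
strace -f ./test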
To make it work, I had to download the OpenMPI source code from www.open-mpi.org and compile it without OpenIB support:
./configure --prefix=/opt/local/openmpi --without-openib
I also had to set the shared library path variable LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=/opt/local/openmpi/lib
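After that, recompiling and running the test with the newly built wrappers should work; for example (assuming the new installation’s bin directory is put first in $PATH):
export PATH=/opt/local/openmpi/bin:$PATH
mpif90 -o test test.F90
mpirun -np 2 ./test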
For Greek language support in Slackware, the following option is needed in xorg.conf, in the keyboard section:
Option "XkbLayout" "us+el"
To be able to switch between languages, an additional option must be declared:
Option "XkbOptions" "grp:alt_shift_toggle,grp_led:scroll"
This declares that the language will be switched using the Alt and Shift keys. When Greek is active, the Scroll Lock light on the keyboard lights up. The whole section should look like the following:
Section "InputDevice"
    Identifier "Keyboard0"
    Driver     "kbd"
    Option     "XkbModel"   "pc105"
    Option     "XkbLayout"  "us+el"
    Option     "XkbOptions" "grp:alt_shift_toggle,grp_led:scroll"
EndSection
To enable the mouse scroll wheel, the following options are needed:
Option "Buttons" "5"
Option "ZAxisMapping" "4 5"
This makes the corresponding section look like the following:
Identifier "Mouse1"
Driver     "mouse"
Option     "Buttons" "5"
Option     "ZAxisMapping" "4 5"
Option     "Protocol" "IMPS/2"
Option     "Device" "/dev/mouse"
Connecting via SSH to a Solaris machine from OS X, I go to open Vi and see the following message:
$:vi
xterm-color: Unknown terminal type
Visual needs addressable cursor or upline capability
:
Then, trying with Emacs:
$:emacs
emacs: Terminal type xterm-color is not defined.
If that is not the actual type of terminal you have,
use the Bourne shell command `TERM=... export TERM' (C-shell:
`setenv TERM ...') to specify the correct type.  It may be necessary
to do `unset TERMINFO' (C-shell: `unsetenv TERMINFO') as well.
The reason: OS X uses xterm-color as its default terminal type, which Solaris doesn’t recognize. The terminal type is set via the $TERM environment variable. If it is set to vt100, Solaris will be happy again:
export TERM=vt100
For permanent use of this terminal type, the above definition can be added to the .bash_profile of the remote account, or, even better, the default Terminal settings in OS X can be changed.
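For the first option, assuming bash is the login shell on the Solaris account, something like this would do it:
echo 'export TERM=vt100' >> ~/.bash_profile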