Tuesday, March 8, 2011

Creating Zones on a New Disk Using Veritas

1. df -h
2. format   <-- create a 5 GB partition on slice s0 with the "root" tag   (create the partition on an empty disk)
3. newfs /dev/dsk/c1t2d0s0
# mkdir /zones
4. mkdir /zones/bdxapi01
5. vi /etc/vfstab
/dev/dsk/c1t2d0s0       /dev/rdsk/c1t2d0s0      /zones/bdxapi01 ufs     2       yes     logging
6. mountall
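A quick sanity check at this point (illustrative only; the device and mount point follow the example above):
# prtvtoc /dev/rdsk/c1t2d0s0     ( slice s0 should appear with the root tag )
# df -h /zones/bdxapi01          ( the new UFS file system should now be mounted )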
Note: Enable Veritas (VxVM) if it is not already enabled
vxdctl mode  ( to check the status of vxconfigd )
vxconfigd -k  ( to kill and restart the configuration daemon )
vxdctl init  ( if the volboot file does not exist )
vxdctl enable  ( to enable Veritas )
vxdisk list  ( to check the Veritas disk information )
7. vxdiskadm, option 2 (encapsulate one or more disks)
- localdg   ( specify the disk group name )
- local01   ( specify the disk name )
- sliced    ( specify the disk format; the default is cds )
=====
If encapsulation fails, then:
a. vxdisksetup -if c1t2d0
b. vxdiskunsetup -C c1t2d0
c. repeat steps 2-7
=====
8. reboot
9. vxprint -ht
10. vxmksdpart -g localdg local01-01 0 0x02 0x00
vxmksdpart -g <disk group> <subdisk> <slice> <tag> <flag>
Or
vxmksdpart -g bdxapi01_dg0 bdxapi01_dg001-01 0 0x02 0x00  ( this is an important step for future updates )
vxmksdpart -g bdxapi02_dg0 bdxapi02_dg001-01 0 0x02 0x00
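To confirm that vxmksdpart created the slice entry (an illustrative check; device and disk group names follow the example above):
# prtvtoc /dev/rdsk/c1t2d0s0     ( the chosen slice, 0 in the example, should now carry the root tag )
# vxprint -g localdg -st         ( lists the subdisks in the disk group )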
11. A) vxedit -g localdg rename rootvol2-01 z1-01
vxedit -g <disk group> rename <plex old name>  <plex new name>
Or
vxedit -g bdxapi01_dg0 rename rootvol2-01 bdxapi01-01  (to rename plex)
vxedit -g bdxapi02_dg0 rename rootvol2-01 bdxapi02-01
 b) vxedit -g localdg rename rootvol2 z1
vxedit -g <disk group> rename <volume old name>  <volume new name>
Or
vxedit -g bdxapi01_dg0 rename rootvol2 bdxapi01   ( to rename volume )
vxedit -g bdxapi02_dg0 rename rootvol2 bdxapi02
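To verify the renames (illustrative; disk group and volume names follow the example above):
# vxprint -g localdg -ht | grep z1     ( the renamed volume and plex should be listed )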
12. vi /etc/vfstab to modify the devices   ( edit the volume name )
Note:
Please update the /etc/vfstab entry with the new volume name
From:
/dev/vx/dsk/localdg/rootvol2    /dev/vx/rdsk/localdg/rootvol2   /zones/z1       ufs     2       yes     logging
To:
/dev/vx/dsk/localdg/z1  /dev/vx/rdsk/localdg/z1 /zones/z1       ufs     2       yes     logging
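A quick check after editing (illustrative, using the names from the example above):
# grep z1 /etc/vfstab     ( confirm the new volume name is in place )
# df -h /zones/z1         ( confirm the zone root is mounted from the renamed device )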

 13. vxassist -g localdg make localhome 5g       ( to create volumes )
vxprint -ht
14. mkfs -F vxfs /dev/vx/rdsk/localdg/localhome       ( to create file system for VXFS )
Note: Create a directory for the mount point
Example:
#cd /zones/bdxapi01
#mkdir localhome
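An optional check that the new VxFS file system was created (illustrative; the volume name follows the example above):
# fstyp /dev/vx/rdsk/localdg/localhome     ( should report vxfs )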
15. vi /etc/vfstab to add all the new devices/mount points             ( add entries like the ones shown below )
#
/dev/vx/dsk/localdg/localhome /dev/vx/rdsk/localdg/localhome /zones/bdxapi01/localhome vxfs - yes -
Example:
# zone01
/dev/vx/dsk/zone1_dg/patrol /dev/vx/rdsk/zone1_dg/patrol /zones/z1/patrol vxfs - yes -
/dev/vx/dsk/zone1_dg/controlm /dev/vx/rdsk/zone1_dg/controlm /zones/z1/controlm vxfs - yes -
/dev/vx/dsk/zone1_dg/localhome /dev/vx/rdsk/zone1_dg/localhome /zones/z1/localhome vxfs - yes -
/dev/vx/dsk/zone1_dg/sbt /dev/vx/rdsk/zone1_dg/sbt /zones/z1/sbt vxfs - yes -
/dev/vx/dsk/zone1_dg/orahome /dev/vx/rdsk/zone1_dg/orahome /zones/z1/orahome vxfs - yes -
/dev/vx/dsk/zone1_dg/sbtlog /dev/vx/rdsk/zone1_dg/sbtlog /zones/z1/sbtlog vxfs - yes -
16. mountall   ( the file systems should now be mounted on the mount points listed above )
Note: Now start creating the zone
#mkdir /zones/bdxapi01/boot
#chmod 700 /zones/bdxapi01/boot
#zonecfg -z <zone name>                    ( let's assume the zone name is zone1 )
#zonecfg:zone1>create   (for sparse root zone )
#zonecfg:zone1>create -b ( for whole root zone )
#zonecfg:zone1>set zonepath=/zones/bdxapi01/boot
#zonecfg:zone1>set autoboot=true
#zonecfg:zone1>add net  ( to add n/w interface )
#zonecfg:zone1:net>set physical=bge0
#zonecfg:zone1:net>set address=170.137.228.250
#zonecfg:zone1:net>end
#zonecfg:zone1>info
#zonecfg:zone1>verify
#zonecfg:zone1>commit
#zonecfg:zone1>add fs
#zonecfg:zone1:fs>set dir=/localhome     ( the mount point as seen inside the zone )
#zonecfg:zone1:fs>set special=/zones/bdxapi01/localhome
#zonecfg:zone1:fs>set type=lofs
#zonecfg:zone1:fs>set options=rw
#zonecfg:zone1:fs>end
#zonecfg:zone1>info
#zonecfg:zone1>verify
#zonecfg:zone1>commit
#zonecfg:zone1>exit
Note: you can add any number of file systems by repeating the add fs block above; a sketch follows below.
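For example, a minimal sketch of adding a second lofs file system (the orahome name here is only illustrative; use whichever mount points were created earlier):
#zonecfg -z zone1
#zonecfg:zone1>add fs
#zonecfg:zone1:fs>set dir=/orahome
#zonecfg:zone1:fs>set special=/zones/bdxapi01/orahome
#zonecfg:zone1:fs>set type=lofs
#zonecfg:zone1:fs>set options=rw
#zonecfg:zone1:fs>end
#zonecfg:zone1>commit
#zonecfg:zone1>exit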
17. zoneadm list -vc   ( you will now find zone1 in the configured state )
Note: Save a copy of the zone configuration under /var/tmp for backup purposes
18. zonecfg -z zone1 export > /var/tmp/z1   ( the saved file can later be re-applied with zonecfg -z zone1 -f /var/tmp/z1 )
19. zoneadm -z zone1 install     ( after this command the zone should be in the installed state )
20. zoneadm -z zone1 boot       ( after this command the zone should be in the running state )
21. zlogin -C zone1        <-- to set the time zone, hostname, root password, etc. on first boot

 

Sunday, March 6, 2011

Zones in Solaris

Zone: A zone is OS-level virtualization introduced in Solaris 10 to give isolation and security to the applications on a server. The main advantage is data center consolidation: we can consolidate multiple physical servers onto one physical server and provide the same environment for all the applications. From the 2008 release a new concept called the "branded zone" was introduced, which enables us to run Solaris 8/9/10 on SPARC servers and Linux flavors on x86 servers. A maximum of 8192 zones can be created inside one physical server, independent of the hardware configuration.

PURPOSE: OS level virtualization.

TYPES: Global zone and Non global zone.

Non-global zone types: sparse root, whole root and branded zone

FILES: /etc/vfstab , /etc/zones

PACKAGES: SUNWzoner

DAEMONS: zoneadmd & zsched

COMMANDS: #zonecfg, #zoneadm & #zlogin

STATES: configured, installed, incomplete, ready, running & shutdown/halt

There are two types of zones: the "global zone" and "non-global zones".

Global zone: the first (default) OS instance you log in to on the server.
Non-global zone: a virtualized OS instance inside the server.

Features of Global zone: It is a default zone used for system wide configuration and control.
Zone id = 0. It provides a single bootable instance of Solaris environment.
Contains info on all the devices and the full installation packages of the Solaris 10 OS. Contains its own configuration: hostname, IP address, user info, etc.
It is the only zone which is aware of the non-global zones, and the only zone from which all the non-global zones can be managed.

Features of Non Global Zone: It is created by the global zone and is also managed by it. A non-global zone is assigned a zone ID by the system when it is booted; whenever a non-global zone is rebooted the zone ID changes. A non-global zone shares the kernel of the global zone.
A non-global zone is not aware of the other non-global zones and cannot administer itself. It contains additional software added at installation time, and a subset of the packages from the global zone that are required to boot and run the OS.

Zone root path:
1. Sparse root zone: only root "/" is copied and the other files are shared. Minimum space required is 100 MB.
2. Whole root zone: everything is copied. Minimum space required is 4 GB.

Sparse Root Zone: Sharing is optimized by implementing read-only loopback file systems from the global zone, and only a subset of the system root packages is installed locally. The majority of the file system is shared from the global zone. The minimum space requirement is 100 MB.

Whole Root Zone: All required packages are copied to the zone's private file system; the minimum size required is 4 GB.

Daemons in Zones: Two major daemons run - zoneadmd & zsched.


zoneadmd: This daemon starts when a zone needs to be managed. An instance of zoneadmd is started for each zone, so it is not uncommon to have multiple instances running on a single server.
This daemon is responsible for the following tasks:
- Allocates the zone ID and starts the zsched process.
- Sets system wide resource controls.
- Plumbs the virtual network interface.
- Mounts the loopback Filesystem and shares the resources from Global Zone.

zsched: The zone scheduler daemon is started by zoneadmd and exists for each active zone. A zone is said to be active when it is in the "ready", "running" or "shutdown" states. The job of this daemon is to keep track of the kernel threads running within the zone.

Zone States: 


Configured: The zone's configuration is complete and committed to stable storage; additional configuration (identity information such as hostname, time zone and root password) must still be supplied after the first boot.

Incomplete: This state is shown during the installation or uninstallation process. After the task completes the state changes to installed (or back to configured after an uninstall).

Installed: Confirmed configuration state. The #zoneadm command is used to verify that the zone can run in the specified environment. The base binaries required to boot and run the zone are copied from the global zone to the non-global zone. The virtual environment is not set up at this stage.

Ready: The kernel creates the zsched process and the virtual environment is set up. Network interfaces are plumbed, file systems are mounted, and a zone ID is assigned by the system.

Running: A zone enters this state when the first user process is created. This is the normal state of an operational zone.

Shutdown: This is a transitional state, visible only while the zone is being halted or when it cannot shut down for some reason.
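In practice the states can be seen with zoneadm; an illustrative listing (zone names and paths are only examples, and the exact columns vary by Solaris 10 update) might look like:
# zoneadm list -cv
  ID NAME             STATUS     PATH
   0 global           running    /
   1 zone1            running    /zones/bdxapi01/boot
   - zone2            configured /zones/zone2/boot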

Security in Solaris

SERVER HARDENING:
This is the practice of making a server secure enough to run 24/7 applications.
It is a post-installation process, i.e. after installing the OS we harden the server based on the needs of the application.

The following tasks are performed :-
1. Removing unnecessary services (see the example after this list).
2. Disable auto install scripts so that no user can install any applications.
3. Disable media drives (CD, DVD..etc) and USB.
4. Removing unnecessary user accounts.
5. Maintaining log for all services.
6. Setting appropriate permissions on all files and directories.
7. Manage space requirements.
8. Consolidate the server for better performance.
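
As an example of the first task, unnecessary SMF services can be listed and disabled with svcs/svcadm (telnet is used here purely as an illustration):
# svcs -a | grep telnet
# svcadm disable svc:/network/telnet:default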

CONCEPTS USED:
- SET UID (User ID) & SET GID (Group ID)
- Stickybit
- ACL (Access control list)
- RBAC (Role based access)
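
A few quick illustrations of these concepts (the file and directory names are hypothetical):
# chmod u+s /usr/local/bin/myapp             ( set-UID on an executable )
# chmod g+s /export/shared                   ( set-GID on a directory )
# chmod +t /export/shared                    ( sticky bit )
# setfacl -m user:oracle:rwx /export/data    ( grant an ACL entry, on UFS )
# getfacl /export/data                       ( display the ACL )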

FILES:
- /etc/default/login
- /etc/default/passwd
- /etc/security/policy.conf
- /var/adm/wtmpx
- /var/adm/utmpx
- /var/adm/sulog
- /var/adm/loginlog
- /etc/nologin
- /etc/user_attr
- /etc/security/prof_attr
- /etc/security/auth_attr
- /etc/security/exec_attr

COMMANDS:

#roleadd
#rolemod
#roledel
#chmod
#chown
#newgrp
#useradd
#usermod
#su

Whenever a user logs in, permissions are verified based on the UID. The OS also maintains an effective UID, which represents the current environment of the user.

Denying root user login in Solaris

root is the superuser in UNIX and can do just about everything. We need root rights to perform advanced administration on UNIX platforms, and there are usually multiple users doing it. Anyone with a root login can perform destructive steps (like running "rm -rf /") that cannot be traced back to who did it. To avoid this, the following change is made to ensure that no one logs in directly as root.

The best practice is to login as a user and perform switch user operation to root for administration rights.

Do the following steps
# vi /etc/default/login

* go to the line CONSOLE=/dev/console
* remove /dev/console, leaving CONSOLE= with no value

Save and exit

Now root cannot directly login to this system.
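
After the edit the line should look like this (shown only for reference); administrators then log in with their own accounts and su to root:
# grep '^CONSOLE' /etc/default/login
CONSOLE=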



Steps to monitor failed logins in Solaris

Creating loginlog file
# touch /var/adm/loginlog


Changing permissions
# chmod 600 /var/adm/loginlog


# vi /etc/default/login
edit the following
RETRIES=3
Note: you can change the retries as per your requirement.

# vi /etc/security/policy.conf
edit the following
* LOCK_AFTER_RETRIES=NO (Change it to YES)

The failed login attempts are logged here:
# cat /var/adm/loginlog
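
A quick way to confirm the settings after editing (illustrative output, reflecting the values set above):
# grep '^RETRIES' /etc/default/login
RETRIES=3
# grep '^LOCK_AFTER_RETRIES' /etc/security/policy.conf
LOCK_AFTER_RETRIES=YES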

Dynamic Multipathing with VXDMP

DMP (Dynamic Multipathing):-
This is a feature of VxVM that provides reliability, redundancy and availability of data. DMP is configured automatically when Veritas is installed. With DMP we can connect multiple paths to a multiported array.



Multiported disk arrays can be connected to the host system through multiple paths. Up to Veritas 3.2 a maximum of 32 paths could be connected to a single server; from 4.x onwards the count is unlimited.
When multiple paths are connected, DMP uses a mechanism that is specific to each supported disk array.
Veritas automatically identifies the available paths to a single physical storage device with the help of the worldwide unique identifier, and performs I/O to the physical device without interruption using the available paths.

DMP supports the following array types:

  1. Active/Active: Allows several paths to be used concurrently for I/O operations. In this type of array DMP provides greater throughput by balancing I/O uniformly across the multiple paths. If one path fails, I/O operations are automatically moved to the next available path.
  2. Active/Passive: Allows access to the disks or LUNs via a primary path, which is active; if the primary path fails, I/O fails over to the secondary path.

To administer dynamic multipathing we use the following commands:

# vxdmpadm: Lists the available paths.
Displays information about the HBA controllers on the host. Displays info about enclosures. Gathers I/O statistics for DMP nodes.
Sets the I/O policy.
Sets the partition size or renames the enclosure.

# vxdisk path (To list the disk paths)
# vxdmpadm listctlr all | more (To list the HBA controllers)
# vxdmpadm getsubpaths ctlr=c1 (To list the subpaths on a controller)
# vxdmpadm listenclosure sena0 (To get info about an enclosure)
# fcinfo hba-port
# vxdmpadm getdmpnode enclosure=sena0

To gather the IO stats of DMP
# vxdmpadm iostat start
# vxdmpadm iostat show all
# vxdmpadm iostat reset
# vxdmpadm iostat stop

To display or set the IO policy

# vxdmpadm getattr enclosure SENA0 iopolicy
# vxdmpadm setattr enclosure SENA0 iopolicy=round-robin

To display the partition size
# vxdmpadm getattr enclosure SENA0 partitionsize

To rename an enclosure
# vxdmpadm setattr enclosure SENA0 name=<new_name>

# vxdiskadm 

Veritas Volume Manager - Daemons

Daemons inside VxVM

  1. vxconfigd: This is the main configuration daemon in Veritas; it maintains the system configuration in the kernel and on disk. Stopping this daemon does not disable any configuration state already loaded into the kernel.
This daemon has 3 modes –
    1. Enabled: normal state.
    2. Disabled: most operations cannot be used.
    3. Booted: normal startup mode, entered while the boot disk group is being imported.

  2. vxrelocd: Monitors for failure events and is responsible for relocating the contents of a failed disk (hot relocation).
  3. vxcached: Maintains and resizes the cache volumes used by space-optimized snapshots.
  4. vxiod: The kernel I/O daemon, which manages I/O operations inside Veritas volumes.
  5. vxconfigbackupd: Responsible for taking a backup of the disk group configuration automatically.
  6. vxnotifyd: Forwards messages to the user, to a file, or to a specific location based on the configuration.

To get the version of veritas installed

# modinfo | grep -i vxvm
# pkginfo -l VRTSvxvm
# cd /etc/vx/bin (All veritas commands are stored in here)
# cd /opt/vrts/bin (Administration commands are stored in here)
# cd /etc/vx/licenses/lic (Veritas licenses are stored here)
#vxlicrep (To get license info)

 To update license keys.

# cd /etc/vx/licenses/lic
# rm -r *
# vxlicinst
        Enter license key.
# vxdctl mode (To know the state of vxconfigd)
# vxdctl disable (Stops daemon)
# vxdctl enable ( Enables daemon)
# vxdctl mode



The volboot file also contains a list of disks to scan in search of the rootdg disk group. At least one disk in this list must be both readable and a part of the rootdg disk group, or the Volume Manager will not be able to start up correctly.
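
To inspect the volboot file (the path below is its usual location):
# cat /etc/vx/volboot
# vxdctl list     ( displays the contents of the volboot file )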