Monday, September 30, 2013

How do I disable IPv6?

Upstream employee Daniel Walsh recommends against disabling the ipv6 kernel module, as that can cause issues with SELinux and other components; instead, add the following to /etc/sysctl.conf:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
To disable IPv6 in the running system:
echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
or
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
If problems with X forwarding are encountered on systems with IPv6 disabled, edit /etc/ssh/sshd_config and make either of the following changes:
(1) Change the line
#AddressFamily any
to
AddressFamily inet
(inet restricts sshd to IPv4 only; inet6 restricts it to IPv6 only.)
or
(2) Remove the hash mark (#) in front of the line
#ListenAddress 0.0.0.0

Then restart the sshd service.
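The sshd_config edits can also be scripted with sed. Below is a sketch run against a throwaway sample file rather than the live /etc/ssh/sshd_config (both changes are shown for illustration, though only one is needed); on a real system you would edit the file in place and then restart sshd:

```shell
# Create a scratch copy containing the two stock (commented) directives.
tmp=$(mktemp)
printf '%s\n' '#AddressFamily any' '#ListenAddress 0.0.0.0' > "$tmp"

# Change (1): restrict sshd to IPv4.
sed -i 's/^#AddressFamily any/AddressFamily inet/' "$tmp"
# Change (2): uncomment the IPv4 listen address.
sed -i 's/^#ListenAddress 0.0.0.0/ListenAddress 0.0.0.0/' "$tmp"

af=$(grep '^AddressFamily' "$tmp")
la=$(grep '^ListenAddress' "$tmp")
echo "$af"    # AddressFamily inet
echo "$la"    # ListenAddress 0.0.0.0
rm -f "$tmp"
```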

About Network Interfaces in EL6

Each physical and virtual network device on an EL6 Linux system has an associated configuration file named ifcfg-interface in the /etc/sysconfig/network-scripts directory, where interface is the name of the interface. For example:
# cd /etc/sysconfig/network-scripts
# ls ifcfg-*
ifcfg-eth0  ifcfg-eth1  ifcfg-lo
In this example, there are two configuration files for Ethernet interfaces, ifcfg-eth0 and ifcfg-eth1, and one for the loopback interface, ifcfg-lo. The system reads the configuration files at boot time to configure the network interfaces.
The following are sample entries from an ifcfg-eth0 file for a network interface that obtains its IP address using the Dynamic Host Configuration Protocol (DHCP):
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=08:00:27:16:C3:33
PEERDNS=yes
PEERROUTES=yes
If the interface is configured with a static IP address, the file contains entries such as the following:
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=08:00:27:16:C3:33
IPADDR=192.168.1.101
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
PEERDNS=yes
PEERROUTES=yes
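The BROADCAST entry is not arbitrary: it is the IPv4 address with all host bits set, i.e. IPADDR OR-ed with the bitwise complement of NETMASK. A quick shell sketch using the example values from the file above:

```shell
# Compute the broadcast address from the example IPADDR and NETMASK.
IPADDR=192.168.1.101
NETMASK=255.255.255.0

# Split the dotted quads into octets.
IFS=. read -r i1 i2 i3 i4 <<EOF
$IPADDR
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$NETMASK
EOF

# Broadcast octet = address octet OR inverted mask octet.
BROADCAST="$(( i1 | (255 - m1) )).$(( i2 | (255 - m2) )).$(( i3 | (255 - m3) )).$(( i4 | (255 - m4) ))"
echo "$BROADCAST"    # 192.168.1.255
```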
The following configuration parameters are typically used in interface configuration files:
BOOTPROTO
How the interface obtains its IP address:
bootp
Bootstrap Protocol (BOOTP).
dhcp
Dynamic Host Configuration Protocol (DHCP).
none
Statically configured IP address.
BROADCAST
IPv4 broadcast address.
DEFROUTE
Whether this interface is the default route.
DEVICE
Name of the physical network interface device (or a PPP logical device).
HWADDR
Media access control (MAC) address of an Ethernet device.
IPADDR
IPv4 address of the interface.
IPV4_FAILURE_FATAL
Whether the device is disabled if IPv4 configuration fails.
IPV6_FAILURE_FATAL
Whether the device is disabled if IPv6 configuration fails.
IPV6ADDR
IPv6 address of the interface in CIDR notation. For example: IPV6ADDR="2001:db8:1e11:115b::1/32"
IPV6INIT
Whether to enable IPv6 for the interface.
MASTER
Specifies the name of the master bonded interface of which this interface is a slave.
NAME
Name of the interface as displayed in the Network Connections GUI.
NETMASK
IPv4 network mask of the interface.
NETWORK
IPv4 address of the network.
NM_CONTROLLED
Whether the network interface device is controlled by the network management daemon, NetworkManager.
ONBOOT
Whether the interface is activated at boot time.
PEERDNS
Whether the /etc/resolv.conf file used for DNS resolution contains information obtained from the DHCP server.
PEERROUTES
Whether the information for the routing table entry that defines the default gateway for the interface is obtained from the DHCP server.
SLAVE
Specifies that this interface is a component of a bonded interface.
TYPE
Interface type.
USERCTL
Whether users other than root can control the state of this interface.
UUID
Universally unique identifier for the network interface device.
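Because ifcfg files are plain KEY=value pairs, they are valid shell fragments, which is how the network scripts themselves consume them. The sketch below sources a throwaway copy (contents mirroring the static example above) rather than a real file under /etc/sysconfig/network-scripts:

```shell
# Build a scratch ifcfg-style file and source it like the network scripts do.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
DEVICE="eth0"
BOOTPROTO=none
IPADDR=192.168.1.101
NETMASK=255.255.255.0
EOF
. "$tmp"
echo "$DEVICE: $IPADDR/$NETMASK (BOOTPROTO=$BOOTPROTO)"
rm -f "$tmp"
```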

Saturday, September 21, 2013

How to configure an authoritative time server in Windows Server

To configure an internal time server to synchronize with an external time source, follow these steps: 

Open Registry Editor and set the following values:

1. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\Type = NTP
2. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\AnnounceFlags = 5
3. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer\Enabled = 1
4. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\NtpServer = time1.nimt.or.th,0x1 time2.nimt.or.th,0x1
5. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\SpecialPollInterval = 900
6. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\MaxPosPhaseCorrection = 72000 (decimal)
7. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\MaxNegPhaseCorrection = 72000 (decimal)

Close Registry Editor.

At the command prompt, type the following command to restart the Windows Time service, and then press Enter:
net stop w32time && net start w32time
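Much of the same configuration can also be applied with the w32tm utility instead of editing the registry by hand; a sketch using the server names from step 4 (run from an elevated command prompt, then verify with w32tm /query /status):

```
w32tm /config /manualpeerlist:"time1.nimt.or.th,0x1 time2.nimt.or.th,0x1" /syncfromflags:manual /reliable:yes /update
w32tm /resync
```

Note that w32tm /config does not set SpecialPollInterval or the phase-correction limits; those still require the registry edits above.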

The Performance Overview tab fails to display in the vSphere Client with VCSA 5.1

Problem:
I cannot view the Performance Overview tab when connecting to vCenter Server Appliance using the vSphere Client 5.1 on MS Windows XP / 2003.

Variations of the error message shown in the Performance Overview tab:
- "This program cannot display the webpage" (seen more often)
- "Navigation to the webpage was cancelled" (seen rarely)


Resolution:
Method 1
Usually the problem is the ciphers attribute in the VCSA Tomcat configuration file, server.xml.

1. Log in to the console of the VCSA as the root user.
2. Locate server.xml under /usr/lib/vmware-vpx/tomcat/conf/.
3. Make a backup of the file (for your peace of mind).
4. Edit the file: find the "ciphers" attribute in server.xml and replace it with the following value:

ciphers="SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA"

5. Restart the vmware-vpxd service:

via console
vcsa-lab:~# service vmware-vpxd restart
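Steps 3 and 4 can be combined into one sed command, since sed -i.bak leaves a backup behind. The sketch below demonstrates the substitution on a throwaway sample file with a shortened cipher list, not the live server.xml; on a real system, point it at the file under /usr/lib/vmware-vpx/tomcat/conf/ and use the full list from step 4:

```shell
# Scratch stand-in for server.xml with a placeholder ciphers attribute.
tmp=$(mktemp)
echo '<Connector port="8443" ciphers="OLD_CIPHER_LIST" scheme="https"/>' > "$tmp"

# Replace the ciphers attribute in place, keeping a .bak backup.
# (The two-cipher list here is abbreviated for the demonstration.)
sed -i.bak 's/ciphers="[^"]*"/ciphers="SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA"/' "$tmp"

new=$(cat "$tmp")
echo "$new"
```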

Posted by Milen Lyutskanov

Method 2

For Windows 2003, try downloading and installing this hotfix on the computer that runs the vSphere Client:

http://hotfixv4.microsoft.com/Windows%20Server%202003/sp3/Fix192447/3790/free/351403_ENU_x64_zip.exe

Cause:
Windows XP/2003 does not support high cipher strength.

Refer to http://support.microsoft.com/kb/948963/en-us

Modified by Kenji


Saturday, August 10, 2013

How to Configure Red Hat Cluster Services Fencing with iLO 3

Information
Environment :

Red Hat Cluster Suite 4+
Red Hat Enterprise Linux 5 Advanced Platform (Clustering)
Red Hat Enterprise Linux Server 6 (with the High Availability Add on)

Description :
Support for the iLO3 fence device was added with the release of cman-2.0.115-34.el5_5.4 through erratum RHEA-2010-0876, which provides support for iLO3 via fence_ipmilan.
The iLO3 firmware should be version 1.15 or later, as provided by HP.

Details
Resolution :
On both cluster nodes, install the following OpenIPMI packages used for fencing:
$ yum install OpenIPMI OpenIPMI-tools

Stop and disable the 'acpid' daemon:
$ service acpid stop; chkconfig acpid off

Start ipmi service on all cluster nodes:
$ service ipmi start; chkconfig ipmi on

Test ipmitool interaction with iLO3:
$ ipmitool -H <iLO3 IP address> -I lanplus -U <username> -P <password> chassis power status

The desired output is:
Chassis Power is on

Edit the /etc/cluster/cluster.conf to add the fence device:

<?xml version="1.0"?>
<cluster alias="rh5nodesThree" config_version="32" name="rh5nodesThree">
  <fence_daemon clean_start="0" post_fail_delay="1" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="rh5node1.examplerh.com" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node1" name="ilo3_node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rh5node2.examplerh.com" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node2" name="ilo3_node2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rh5node3.examplerh.com" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node3" name="ilo3_node3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="3">
    <multicast addr="229.5.1.1"/>
  </cman>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3_node1" passwd="password"/>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3_node2" passwd="password"/>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3_node3" passwd="password"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

Test that fencing is successful. From node1 attempt to fence node2 as follows:
$ fence_node node2
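Before testing fencing, it can be worth sanity-checking that cluster.conf defines one fencedevice per clusternode. The grep sketch below runs against a pared-down throwaway copy of the configuration above, not the live /etc/cluster/cluster.conf (on RHEL 6, the ccs_config_validate utility performs a fuller schema check):

```shell
# Pared-down copy of the cluster.conf structure above, one element per line.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
<clusternode name="rh5node1.examplerh.com" nodeid="1" votes="1"/>
<clusternode name="rh5node2.examplerh.com" nodeid="2" votes="1"/>
<clusternode name="rh5node3.examplerh.com" nodeid="3" votes="1"/>
<fencedevice agent="fence_ipmilan" name="ilo3_node1"/>
<fencedevice agent="fence_ipmilan" name="ilo3_node2"/>
<fencedevice agent="fence_ipmilan" name="ilo3_node3"/>
EOF

# The two counts should match: every node needs a fence device.
nodes=$(grep -c '<clusternode ' "$tmp")
devices=$(grep -c '<fencedevice ' "$tmp")
echo "clusternodes=$nodes fencedevices=$devices"
rm -f "$tmp"
```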