Channel: Linux Toolkits
Viewing all 449 articles

Accurately Time Your Parallel Loops in OpenMP


Location of GPFS Client Log file

The GPFS log file is located at /var/adm/ras/mmfs.log.latest. You can find a wealth of error and informational messages there. When you are monitoring errors in real time, you may want to use tail -f for live troubleshooting

# tail -f /var/adm/ras/mmfs.log.latest
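Tailing the whole log can be noisy; a quick way to pull out only the error-related lines is to filter with grep. This is just a sketch: the keyword pattern below is one reasonable choice, not an official GPFS convention.

```shell
# Show the 20 most recent log lines mentioning errors or failures
# (the keyword pattern is a guess at useful search terms).
grep -iE 'error|fail' /var/adm/ras/mmfs.log.latest | tail -n 20
```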

Trying to allocate 1005 pages for VMLINUZ error when booting with RHEL or CentOS 6.5 disks

I was booting RHEL 6.5 or CentOS 6.5 on an IBM PureFlex System and encountered this error. It occurs because, when installing Red Hat Enterprise Linux 6 from DVD media, the installation defaults to a native Extensible Firmware Interface (EFI) mode boot.

According to IBM Website, 

The workaround is simply to install the operating system in the traditional legacy mode, since there is generally no reason to install in other than Legacy mode. The workaround is only necessary if the media you are booting defaults to EFI mode (DVD or EFI Preboot eXecution Environment (PXE)) otherwise a legacy installation (e.g. - traditional PXE) is the default and is unaffected by this issue.

To force a legacy installation of the operating system from the EFI bootable DVD media the user should:

    Press F12 key when the IBM splash screen is shown during system boot.
    Select Legacy Only option and press Enter.
    The operating system will boot and install in traditional legacy boot mode.

And the issue was resolved.

References:
  1. Red Hat Enterprise Linux 6 (RHEL6) native Extensible Firmware Interface (EFI) install is not supported with greater than 512 GB memory - IBM System x and BladeCenter
  2. Bug 691860 - UEFI version of ISO fails to boot when >4gig (since f14)

Security Issue: RHEL glibc based privilege escalation (CVE-2014-5119, Important)

Issue:
There is a flaw in glibc that can allow a local unprivileged user to gain root on Red Hat Enterprise Linux machines.


A public exploit was released on August 25th. This issue is tracked as CVE-2014-5119 in the MITRE Common Vulnerabilities and Exposures (CVE) database. The issue cannot be blocked by our security technologies (such as SELinux). This issue affects the version of glibc shipped with Red Hat Enterprise Linux 5, 6 and 7.

Resolution:
Please update your glibc to the latest version. Check the errata RHSA-2014:1110-1 for the glibc that matches your operating system version.

Reference:
See this KCS article for more detail: https://access.redhat.com/solutions/1176253

WordPress 3.9.2 Security Release

WordPress 3.9.2 is now available as a security release for all previous versions. We strongly encourage you to update your sites immediately.

This release fixes a possible denial of service issue in PHP’s XML processing, reported by Nir Goldshlager of the Salesforce.com Product Security Team. It was fixed by Michael Adams and Andrew Nacin of the WordPress security team and David Rothstein of the Drupal security team. This is the first time our two projects have coordinated joint security releases.

For more information, do take a look at http://wordpress.org/news/2014/08/wordpress-3-9-2/

SingAREN - Launch of SLIX, Singapore and the Region's First 100Gbps High Speed R&E Network, 28 Aug 2014



SINGAPORE, 28 August 2014 – Singapore Advanced Research and Education Network (SingAREN) announced today the launch of SingAREN-Lightwave Internet Exchange (SLIX), the first 100Gbps community network to be set up in the Southeast Asia region.

With SLIX, Singapore’s Research and Education (R&E) community will gain seamless access to a super high speed network with a hundred times more capacity than before; and enjoy bandwidth fully dedicated to their use. Built on an optical fibre core comprising dark fibres, SLIX allows resiliency, future capacity upgrade, and technology-proof network connectivity.
The new network also opens up new possibilities as a test-bed, extending database mirroring services, bilateral disaster recovery, high performance computing federation and shared services, high volume peering for content data networks and other value-adding services to the R&E community. In addition, SLIX will enable research organisations to test different protocols for interconnections, such as InfiniBand, and optical network researchers to carry out their experiments.

“SingAREN is proud to be the first to launch a 100 Gbps research and education network in the region. By increasing the network speed by ten-fold and with our suite of value-added services, SingAREN aims to facilitate collaborations amongst our local research organisations and with their international counterparts,” said A/Prof Francis Lee Bu Sung, President of SingAREN. “We would like to thank A*STAR, NTU and NUS for working closely with us to realise this network.”

Funded by SingAREN and the National Research Foundation (NRF), SLIX is a collaboration and a network built between SingAREN, the Agency for Science, Technology and Research (A*STAR), the Nanyang Technological University (NTU) and the National University of Singapore (NUS).

SingAREN selected 3D Networks to build the first 100 Gbps research and education network in the region. 3D Networks has deployed a flexible and programmable Packet Optical Platform meeting the advanced requirements of global research collaborators, and capable of scaling up to 400Gbps and beyond. 3D Networks built the DWDM network with the Ciena (NYSE: CIEN)
6500 Converged Packet Optical solution, and with Ciena’s Network Operations Centre and Network Transformation Solutions team providing management and monitoring of the network. The solution is supplemented with Brocade’s Open Flow enabled equipment.
Bluetel Networks is the fibre cable provider for SLIX.





Error when sourcing or using compilervars.csh/ippvars.csh - arch: Undefined variable

If you see the following error when sourcing or using compilervars.csh/ippvars.csh:
$ cd /opt/intel/composer_xe_2013_sp1.2.144/bin
$ source ./compilervars.csh intel64
arch: Undefined variable.

According to the Intel article, Error when using compilervars.csh/ippvars.csh - arch: Undefined variable:

Problem Description:

A defect exists in the Intel® Integrated Performance Primitives (IPP) 8.1 Initial release in the ippvars.csh file distributed for Linux* (found under: /opt/intel/composer_xe_2013_sp1/ipp).

The IPP 8.1 release (Package ID: l_ipp_8.1.0.144) is available as a stand-alone download or bundled with the Intel® Composer XE 2013 SP1 Update 2 release (Package id: l_ccompxe_2013_sp1.2.144) for customers with valid licenses from the Intel Registration Center.

The defect is caused by improper initialization of an internal variable used within the script, which leads to the error “arch: Undefined variable.” when the script is sourced directly or indirectly via the compilervars.csh script (found under: /opt/intel/composer_xe_2013_sp1/bin).


Resolution:
This defect is fixed in the Intel® C++ Composer XE 2013 SP1 Update 3 Release (Package id: l_ccompxe_2013.1.3.174 - Version 14.0.3.174 Build 20140422) now available from our Intel Registration Center.

Pending an update to the fixed release, users (or sys-admins) with appropriate root privileges can edit the ippvars.csh file to insert ONLY the new line 37 as noted in the code snippet below (ahead of line 38, which is the original line 37) to set the variable arch to the value of the first incoming argument (e.g. $1):
     37    set arch="$1"
     38    if ( "$1" == "ia32_intel64" ) then
     39     setenv arch intel64
     40    endif
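If you prefer not to edit the file by hand, the same one-line insertion can be scripted with sed. This is only a sketch: back up the file first, and note that the path below is an assumption based on the package layout described above, so adjust it to match your own installation.

```shell
# Insert the missing 'set arch' line as the new line 37 of ippvars.csh
# (path is an assumption; adjust to your install).
IPPVARS=/opt/intel/composer_xe_2013_sp1/ipp/bin/ippvars.csh
cp "$IPPVARS" "$IPPVARS.bak"                 # keep a backup first
sed -i '37i\    set arch="$1"' "$IPPVARS"    # old line 37 becomes line 38
```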

Tracking NetApp Cluster-Mode Performance

To track performance on NetApp Cluster-Mode storage, use the "statistics show-periodic" command
netapp-cluster1::> statistics show-periodic
cluster:summary: cluster.cluster: 9/9/2014 09:33:29
cpu total data data data cluster cluster cluster disk disk
busy ops nfs-ops cifs-ops busy recv sent busy recv sent read write
---- -------- -------- -------- ---- -------- -------- ------- -------- -------- -------- --------
5% 303 303 0 2% 4.86MB 223KB 0% 1.16MB 1.17MB 685KB 571KB
5% 312 312 0 3% 8.27MB 359KB 0% 1.11MB 1001KB 679KB 39.4KB
8% 300 300 0 2% 7.29MB 495KB 0% 2.87MB 3.30MB 2.66MB 59.1KB
6% 158 158 0 1% 3.53MB 168KB 0% 2.16MB 1.51MB 2.71MB 11.1MB
5% 184 184 0 2% 4.48MB 1.22MB 0% 1.99MB 1.97MB 1.21MB 10.9MB
5% 213 213 0 1% 2.82MB 222KB 0% 902KB 749KB 240KB 671KB
3% 144 144 0 1% 2.32MB 762KB 0% 559KB 685KB 96.6KB 15.8KB
4% 199 199 0 1% 3.73MB 881KB 0% 796KB 715KB 390KB 39.6KB
7% 164 164 0 1% 4.49MB 365KB 0% 2.34MB 2.43MB 2.52MB 8.33MB
7% 115 115 0 2% 4.07MB 154KB 0% 1.23MB 1.25MB 2.41MB 9.80MB
3% 224 224 0 1% 2.72MB 163KB 0% 1.80MB 721KB 407KB 996KB
4% 220 220 0 1% 4.38MB 1.32MB 0% 451KB 1.54MB 199KB 110KB
5% 124 124 0 1% 2.97MB 157KB 0% 315KB 273KB 251KB 15.8KB
7% 153 153 0 0% 1.76MB 139KB 0% 220KB 268KB 2.54MB 1.28MB
4% 120 120 0 0% 1.30MB 80.4KB 0% 417KB 325KB 2.86MB 13.9MB
cluster:summary: cluster.cluster: 9/9/2014 09:34:01
cpu total data data data cluster cluster cluster disk disk
busy ops nfs-ops cifs-ops busy recv sent busy recv sent read write
---- -------- -------- -------- ---- -------- -------- ------- -------- -------- -------- --------
Minimums:
3% 115 115 0 0% 1.30MB 80.4KB 0% 220KB 268KB 96.6KB 15.8KB
Averages for 15 samples:
5% 195 195 0 1% 3.93MB 451KB 0% 1.22MB 1.19MB 1.32MB 3.84MB
Maximums:
8% 312 312 0 3% 8.27MB 1.32MB 0% 2.87MB 3.30MB 2.86MB 13.9MB

Accessing RAID Configuration in IBM x3650M2 or IBM x3550M2

I was trying to locate the RAID configuration on an IBM x3650M2 or IBM x3550M2. In the older configuration, there is an LSI configuration utility where you press Ctrl-C to get to the WebBIOS.

To locate the LSI configuration,
  1. Boot to the BIOS Setup
  2. System Settings
  3. Adapter and UEFI Drivers
  4. List All Drivers and Adapter
  5. LSI (Hit Enter and you will enter the WebBIOS)

Creating a RAM Disk on CentOS 6

Do take a look at the clear and easy-to-understand articles on the Difference between ramfs and tmpfs and Create a Ram Disk in Linux for more detailed information. The information in this post is taken from Create a Ram Disk in Linux.

There are many reasons to create a RAM disk. One reason is to run an isolated latency or throughput test between interconnects while discounting the effects of spinning-disk I/O that might otherwise be the bottleneck of the test. Another case is to store temp files which require very fast I/O. Nothing beats memory.

Step 1: Check how much RAM you have. Display it in MB
# free -m
[root@n01 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         50276       1219      49056          0         83        555
-/+ buffers/cache:        580      49695
Swap:        25207          0      25207
You can also display with -g (GB) or -k (KB)

Step 2: Create and Mount a RAM Disk
# mkdir /mnt/ramdisk
# mount -t tmpfs -o size=16g tmpfs /mnt/ramdisk
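Once mounted, it is worth confirming the filesystem type and size before pointing anything at it. A quick sanity check (adjust the mount point if yours differs):

```shell
# Confirm the ramdisk is mounted as tmpfs with the requested size.
df -h /mnt/ramdisk
mount | grep /mnt/ramdisk
```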

Step 3: If you wish to create automatic mount, do place it at /etc/fstab
tmpfs       /mnt/ramdisk tmpfs   nodev,nosuid,noexec,nodiratime,size=16g   0 0 

Adding SVG MIME Type to Apache on CentOS

What is MIME?

According to www.w3.org/services/svg-server

MIME Types (sometimes referred to as "Internet media types") are the primary method to indicate the type of resources delivered via MIME-aware protocols such as HTTP and email. User agents (such as browsers) use media types to determine whether that user agent supports that specific format, and how the content should be processed. When an SVG document is not served with the correct MIME Type in the Content-Type header, it might not work as intended by the author; for example, a browser might render the SVG document as plain text or provide a "save-as" dialog instead of rendering the image.

To add SVG to the list of supported MIME types, simply add these lines to your /etc/httpd/conf/httpd.conf. I have placed them at around line 786

#
# AddType allows you to add to or override the MIME configuration
# file mime.types for specific file types.
#
#AddType application/x-tar .tgz
AddType image/svg+xml svg svgz
AddEncoding gzip svgz
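After editing httpd.conf, reload Apache and check the header being served. This is a sketch: the hostname and file path below are placeholders for your own server, not anything from the original configuration.

```shell
# Reload Apache so the new MIME mapping takes effect, then inspect
# the Content-Type header of an SVG file (placeholder URL).
service httpd reload
curl -sI http://localhost/images/logo.svg | grep -i '^Content-Type'
```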


Red Hat Video and Articles on SystemD for Red Hat Enterprise Linux 7

Systemd, now available in Red Hat Enterprise Linux 7, offers you shorter system startup, refined control for process startup and management, and enhanced logging through journald. Learn more about systemd and how to get started.

Do take a look at the video clips and articles offered by Red Hat. All the information can be found at Starting with systemd


Systemd Startup
Working with Systemd targets
Enabling services at runtime
Converting init scripts to systemd units
Managing services with systemd
Shutting down and hibernating the system
Controlling Systems on a remote system

and more.......

Installing dokuwiki on CentOS 6

This writeup is a modification of Installing dokuwiki on CentOS.

Step 1: Get the latest dokuwiki from http://download.dokuwiki.org/
# wget http://download.dokuwiki.org/src/dokuwiki/dokuwiki-stable.tgz
# tar -xzvf dokuwiki-stable.tgz
Step 2: Move dokuwiki files to apache directory
# mv dokuwiki-stable /var/www/html/dokuwiki
Step 3: Set Ownership and Permission for dokuwiki
# chown -R apache:root /var/www/html/dokuwiki
# chmod -R 664 /var/www/html/dokuwiki/
# find /var/www/html/dokuwiki/ -type d -exec chmod 775 {} \;
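The chmod/find pair above gives files 664 (group-writable for apache, never executable) and directories 775 (directories need the execute bit for traversal). One caveat worth knowing: as a non-root user, a recursive `chmod -R 664` strips the execute bit from directories mid-walk and can lock you out of the subtree; a safer equivalent touches files and directories separately. A minimal demonstration of that variant on a throwaway tree:

```shell
# Demonstrate the files-664 / directories-775 pattern on a scratch tree,
# using two find passes instead of a recursive chmod.
mkdir -p /tmp/dokudemo/sub
touch /tmp/dokudemo/sub/page.txt
find /tmp/dokudemo -type f -exec chmod 664 {} \;   # files: rw-rw-r--
find /tmp/dokudemo -type d -exec chmod 775 {} \;   # dirs:  rwxrwxr-x
stat -c '%a %n' /tmp/dokudemo/sub /tmp/dokudemo/sub/page.txt
```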
Step 4: Continue the installation at http://192.168.1.1/dokuwiki/install.php. Ignore the security warning; we can only move the data directory after installing. Fill out the form and click Save.
Step 5: Delete install.php for security
# rm /var/www/html/dokuwiki/install.php
Step 6: Create and move the data, bin (CLI) and conf directories out of the apache directories for security. Assuming apache accesses only /var/www/html and /var/cgi-bin (not /var/www itself), this secures dokuwiki (or use a different directory):
# mkdir /var/www/dokudata
# mv /var/www/html/dokuwiki/data/ /var/www/dokudata/
# mv /var/www/html/dokuwiki/conf/ /var/www/dokudata/
# mv /var/www/html/dokuwiki/bin/ /var/www/dokudata/
Step 7: Update dokuwiki with the location of the conf directory
# vim /var/www/html/dokuwiki/inc/preload.php
<?php
// DO NOT use a closing php tag. This causes a problem with the feeds,
// among other things. For more information on this issue, please see:
// http://www.dokuwiki.org/devel:coding_style#php_closing_tags

define('DOKU_CONF','/var/www/dokudata/conf/');
* Note the comment explaining why there is no closing php tag.
Step 8: Update dokuwiki with the location of the data directory
# vim /var/www/dokudata/conf/local.php
$conf['savedir'] = '/var/www/dokudata/data/';
Step 9: Set permissions for dokuwiki again, applying the same permissions to the new directories
# chown -R apache:root /var/www/html/dokuwiki
# chmod -R 664 /var/www/html/dokuwiki/
# find /var/www/html/dokuwiki/ -type d -exec chmod 775 {} \;

# chown -R apache:root /var/www/dokudata
# chmod -R 664 /var/www/dokudata/
# find /var/www/dokudata/ -type d -exec chmod 775 {} \;
Step 10: Go to the wiki at http://192.168.1.1/dokuwiki/

Centrify Error - Not authenticated: while getting service credentials: No credentials found with supported encryption


I was not able to authenticate with my password when I tried to log on with PuTTY; only the local root account was able to log on. A closer look at the log file showed:
Sep 17 12:00:00 node1 sshd[4725]: error: PAM: 
Authentication failure for user2 from 192.168.1.5
Sep 17 12:00:01 node1 adclient[7052]: WARN  audit User 'user2' not authenticated:
while getting service credentials:
No credentials found with supported encryption

The solution was very simple. Just restart the centrifydc and centrify-sshd services
# service centrifydc restart
# service centrify-sshd restart


haproxy unable to bind socket


After configuring haproxy as described in
Install and Configure HAProxy on CentOS/RHEL 5/6, you might encounter the following error when you start the haproxy service.
Starting haproxy: [WARNING] 265/233231 (20638) : config : log format ignored for proxy 'load-balancer-node' since it has no log address.
[ALERT] 265/233231 (20638) : Starting proxy load-balancer-node: cannot bind socket

To check what other service is listening on the port, do the following
# netstat -anop | grep ":3389"
tcp 0 0 0.0.0.0:3389 0.0.0.0:* LISTEN 20606/xrdp off (0.00/0/0)
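netstat is deprecated on newer distributions; the same check can be done with ss from iproute2. The port number below is just the one from this example, i.e. whatever your haproxy frontend binds to.

```shell
# List listening TCP sockets with their owning processes,
# filtered to the conflicting port.
ss -ltnp | grep ':3389'
```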

Stop the listening services
# service xrdp stop

Start the haproxy service
# service haproxy start


You should not encounter any error now.

Critical Security Vulnerability: Bash Code Injection Vulnerability, aka Shellshock (CVE-2014-6271)

A critical vulnerability in the Bourne-again shell, commonly known as Bash, which is present in most Linux and UNIX distributions as well as Apple’s Mac OS X, has been found, and administrators are being urged to patch and remediate immediately. Do read https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/

The flaw discovered allows an attacker to remotely attach a malicious executable to a variable that is executed when Bash is invoked. 

Operating systems with updates include:
CentOS
Debian
Red Hat
More info: https://access.redhat.com/articles/1200223

Proof-of-concept code for exploiting Bash-using CGI scripts to run code with the same privileges as the web server is already floating around the web. A simple Wget fetch can trigger the bug on a vulnerable system.

http://www.theregister.co.uk/2014/09/24/bash_shell_vuln/
http://www.wordfence.com/blog/2014/09/major-bash-vulnerability-disclosed-may-affect-a-large-number-of-websites-and-web-apps/

Diagnostic Steps
To test if your version of Bash is vulnerable to this issue, run the following command:
$ env x='() { :;}; echo vulnerable'  bash -c "echo this is a test"
If the output of the above command looks as follows:
vulnerable
this is a test

then you are using a vulnerable version of Bash. The patch used to fix this issue ensures that no code is allowed after the end of a Bash function. Thus, if you run the above example with the patched version of Bash, you should get an output similar to:
$ env x='() { :;}; echo vulnerable'  bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

Additional Information - Critical Security Vulnerability: Bash Code Injection Vulnerability, aka Shellshock (CVE-2014-6271)

Security advisories have been released by the key vendors with regard to the Bash vulnerabilities affecting their products.
Kindly refer to the link for more detailed information

i) Cisco
http://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20140926-bash


ii) Oracle
http://www.oracle.com/technetwork/topics/security/alert-cve-2014-7169-2303276.html

iii) Juniper
http://kb.juniper.net/InfoCenter/index?page=content&id=JSA10648

iv) Apple
No official patch release – coming soon
http://www.imore.com/apple-working-quickly-protect-os-x-against-shellshock-exploit

Listing and cleaning out old xauth entries on CentOS 5

When we do X-forwarding,

$ ssh -X somehost.com

and you then do an xauth list to check on your X-forwarding session, you can see an xauth entry something like:

$ xauth list

..... 
current-local-server:17  MIT-MAGIC-COOKIE-1  395f7b22fb6087a29b5fb1c9e37577c0
.....

Somehow, after exiting the X-forwarding session, the entry is still found in the xauth list.

To clear the xauth entries, you can take a look at Clean up old xauth entries

In that blog entry, the author uses:
$ xauth list | cut -f1 -d\  | xargs -i xauth remove {}
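The pipeline works by cutting the display name (the first space-delimited field) out of each `xauth list` line and feeding it back to `xauth remove`. The field-extraction stage can be seen in isolation with a sample entry rather than a live xauth database:

```shell
# Extract the display field from a sample xauth entry, exactly as
# the "cut -f1 -d' '" stage of the pipeline does.
echo 'current-local-server:17  MIT-MAGIC-COOKIE-1  395f7b22fb6087a29b5fb1c9e37577c0' \
    | cut -f1 -d' '
# -> current-local-server:17
```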

Compiling VASP 5.3.5 with OpenMPI 1.6.5 and Intel 12.1.5

Unable to open socket connection to xcatd daemon on localhost:3001.

When I did a tabedit site to check my configuration, I encountered this error
Unable to open socket connection to xcatd daemon on localhost:3001.
Verify that the xcatd daemon is running and that your SSL setup is correct.

The solution to this error is quite easy. You just need to check your /etc/hosts: have you commented out the localhost line? In other words, make sure you have a line like this in your /etc/hosts
127.0.0.1       localhost.localdomain                   localhost
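A quick way to confirm localhost resolves correctly before restarting xcatd (a sketch; getent consults /etc/hosts via the normal nsswitch lookup path):

```shell
# Verify that localhost resolves to the loopback address.
getent hosts localhost
```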

That's it.....
