Category Archives: Linux Based

How to get Citrix Receiver in Ubuntu

Below is a short document that helps to install Citrix Receiver.


How to update certificates to use the SHA-2 hashing algorithm

Following National Institute of Standards and Technology (NIST) recommendations, certificates signed with the Secure Hash Algorithm-1 (SHA-1) will no longer be supported after 2017. According to experts, using the SHA-1 hashing algorithm in digital certificates could allow an attacker to spoof content, perform phishing attacks, or mount man-in-the-middle attacks. SHA-1 is currently the most widely used digest algorithm, appearing in more than 98% of certificates.

Microsoft has announced a policy change to the Microsoft Root Certificate Program: Windows will stop accepting SHA-1 end-entity certificates by January 1, 2017, and will stop accepting SHA-1 code signing certificates without timestamps after January 1, 2016.

Google Chrome has started warning end users when they connect to a secure website using SSL certificates signed with the SHA-1 algorithm (read the Google blog post).
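A quick way to tell whether an existing certificate is still SHA-1 signed is to have openssl print its signature algorithm. As a self-contained sketch, a throwaway self-signed SHA-256 certificate is generated first (the /tmp paths and the CN are placeholders):

```shell
# Generate a throwaway self-signed certificate (demo only), forcing SHA-256
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
    -subj "/CN=demo" -keyout /tmp/demo.key -out /tmp/demo.crt

# Print the signature algorithm; a SHA-1 certificate would show
# "sha1WithRSAEncryption" here instead
openssl x509 -noout -text -in /tmp/demo.crt | grep "Signature Algorithm"
```

Run the same `openssl x509` command against any deployed certificate to check whether it needs to be re-issued with SHA-2.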

Read more on how to do it: How to update certificates to use SHA-2

Adding certificates using commands

On Red Hat-based systems, copy the certificate into the CA trust anchors directory, enable the dynamic CA configuration feature, and regenerate the trust store:

cp foo.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust enable
update-ca-trust extract

On Debian/Ubuntu, copy the certificate into the local CA directory:

sudo cp foo.crt /usr/local/share/ca-certificates/foo.crt

Then update the CA store:

sudo update-ca-certificates

On Windows, use certutil to add certificates to the appropriate store:

certutil -addstore -f "TrustedPublisher" <pathtocertificatefile>
certutil -addstore -f "CA" <pathtocertificatefile> (for intermediate certificates)
certutil -addstore "Root" "c:\cacert.cer" (for root certificates)
certutil -addstore "MY" "<pathtocertificatefile>" (for local/personal certificates)
certutil -addstore "spc" "<pathtocertificatefile>" (for software publisher certificates)
certutil -addstore "user_created_store" "<pathtocertificatefile>" (for a user-created certificate store)

AddressBook -> specifies “Other People” store
Trust -> specifies “Enterprise Trust” store
TrustedPublisher -> specifies “Trusted Publishers” store

certutil -f -p [certificate_password] -importpfx C:\[certificate_path_and_name].pfx

Installed applications: Unix commands

Show all installed packages or software in Linux, FreeBSD, OpenBSD
Red Hat/Fedora Core/CentOS Linux

Type the following command to get a list of all installed software:
# rpm -qa | less
# yum list installed
Debian Linux

Type the following command to get a list of all installed software:
# dpkg --get-selections

Ubuntu Linux

Type the following command to get a list of all installed software:
# sudo dpkg --get-selections


FreeBSD

Type the following command to get a list of all installed software:
# pkg_info | less
# pkg_info apache

Use the pkg_version command to summarize the versions of all installed packages:
# pkg_version | less
# pkg_version | grep 'lsof'


OpenBSD also uses the pkg_info command to display a list of all installed packages or software:
# pkg_info | less
# pkg_info apache

To show the installed version of a single package on Debian/Ubuntu:
# apt-cache policy <package-name> | grep Installed: | cut -d: -f2
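For a cleaner, machine-readable list on Debian-family systems, dpkg-query's format string can be used; a sketch, guarded so it degrades gracefully where dpkg-query does not exist:

```shell
# List installed packages with versions, one per line (Debian/Ubuntu only)
if command -v dpkg-query >/dev/null 2>&1; then
    dpkg-query -W -f='${Package} ${Version}\n' | sort | head
else
    echo "dpkg-query not available on this system"
fi
```

The format string approach is easier to feed into scripts than parsing `dpkg --get-selections` output.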

PuTTY through a proxy/gateway

1. Set up your session configuration (IP address or FQDN of the target server).

2. Go to Connection > Proxy:
Proxy type: Local
Proxy host: your gateway/proxy
Telnet or local proxy command to run:
plink username@%proxyhost -nc %host:%port \n

3. Go to Connection > SSH > Auth and enable agent forwarding.

4. Set your keys under Connection > SSH > Auth > keyfile.

KiTTY is another piece of software similar to PuTTY.

Telnet or local proxy command to run:
klink.exe %user@%proxyhost -nc %host:%port \n
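For comparison, the same jump-host setup can be expressed in the OpenSSH client configuration; a sketch in which the host names are placeholders:

```
# ~/.ssh/config -- hop through the gateway to reach the internal target
Host target
    HostName target.internal.example.com
    ProxyCommand ssh -W %h:%p gateway.example.com
    ForwardAgent yes
```

With this in place, `ssh target` tunnels through the gateway transparently, much like PuTTY's local proxy command.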

OpenSSL high severity vulnerability released on 9th July 2015

The OpenSSL project described as "high severity" a vulnerability, CVE-2015-1793, for which fixes were released on 9 July 2015. The OpenSSL project's high severity category includes issues such as server denial-of-service, a significant leak of server memory, and remote code execution. According to the announcement, the July 9 release delivered OpenSSL versions 1.0.2d and 1.0.1p, which address the flaw. Versions 1.0.0 and 0.9.8 are not affected.

Security experts have speculated that this high severity bug could be another Heartbleed (discovered in April 2014, a bug in an earlier version of OpenSSL that allowed hackers to read sensitive contents of victims' encrypted data, including credit card details, and even steal SSL keys from Internet servers or client software), POODLE (Padding Oracle On Downgraded Legacy Encryption, unearthed in the decade-old but widely used SSL 3.0 cryptographic protocol, which allowed attackers to decrypt the contents of encrypted connections), or FREAK (a flaw revealed earlier in 2015 that can allow an attacker to initiate a weaker type of encrypted connection that can be compromised more easily), which were considered to be among the worst TLS/SSL vulnerabilities and are still believed to be affecting websites on the Internet today.

The latest versions also patch Logjam (CVE-2015-4000), a TLS bug that can be exploited through man-in-the-middle (MitM) attacks to downgrade connections to 512-bit export-grade cryptography. The vulnerability allows an attacker to read and alter encrypted data.
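As a first approximation, the installed OpenSSL can be checked against the fixed releases by printing its version string (note that distribution vendors sometimes backport fixes without bumping the version, so this alone is not conclusive):

```shell
# Print the installed OpenSSL version; 1.0.1p / 1.0.2d or later contain
# the CVE-2015-1793 fix
openssl version
```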

Reference for the POODLE attack

Using nginx as a load balancer

Basic HTTP server features

Serving static and index files, autoindexing; open file descriptor cache;
Accelerated reverse proxying with caching; simple load balancing and fault tolerance;
Accelerated support with caching of FastCGI, uwsgi, SCGI, and memcached servers; simple load balancing and fault tolerance;
Modular architecture. Filters include gzipping, byte ranges, chunked responses, XSLT, SSI, and image transformation filter. Multiple SSI inclusions within a single page can be processed in parallel if they are handled by proxied or FastCGI/uwsgi/SCGI servers;
SSL and TLS SNI support.
Other HTTP server features

Name-based and IP-based virtual servers;
Keep-alive and pipelined connections support;
Flexible configuration;
Reconfiguration and upgrade of an executable without interruption of the client servicing;
Access log formats, buffered log writing, fast log rotation, and syslog logging;
3xx-5xx error codes redirection;
The rewrite module: URI changing using regular expressions;
Executing different functions depending on the client address;
Access control based on client IP address, by password (HTTP Basic authentication) and by the result of subrequest;
Validation of HTTP referer;
The PUT, DELETE, MKCOL, COPY, and MOVE methods;
FLV and MP4 streaming;
Response rate limiting;
Limiting the number of simultaneous connections or requests coming from one address;
Embedded Perl.

Load balancing methods

The following load balancing mechanisms (or methods) are supported in nginx:

round-robin — requests to the application servers are distributed in a round-robin fashion,
least-connected — next request is assigned to the server with the least number of active connections,
ip-hash — a hash-function is used to determine what server should be selected for the next request (based on the client’s IP address).
Default load balancing configuration

The simplest configuration for load balancing with nginx may look like the following:

http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
In the example above, there are 3 instances of the same application running on srv1-srv3. When the load balancing method is not specifically configured, it defaults to round-robin. All requests are proxied to the server group myapp1, and nginx applies HTTP load balancing to distribute the requests.

Reverse proxy implementation in nginx includes load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, and memcached.

To configure load balancing for HTTPS instead of HTTP, just use “https” as the protocol.

When setting up load balancing for FastCGI, uwsgi, SCGI, or memcached, use fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives respectively.
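For instance, terminating TLS in nginx while keeping the same upstream group only changes the scheme in proxy_pass; a sketch in which the certificate paths are placeholders:

```
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        # proxy to the upstream group over HTTPS instead of HTTP
        proxy_pass https://myapp1;
    }
}
```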

Least connected load balancing

Another load balancing discipline is least-connected. Least-connected allows controlling the load on application instances more fairly in a situation when some of the requests take longer to complete.

With the least-connected load balancing, nginx will try not to overload a busy application server with excessive requests, distributing the new requests to a less busy server instead.

Least-connected load balancing in nginx is activated when the least_conn directive is used as part of the server group configuration:

upstream myapp1 {
    least_conn;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
Session persistence

Please note that with round-robin or least-connected load balancing, each subsequent client’s request can be potentially distributed to a different server. There is no guarantee that the same client will be always directed to the same server.

If there is the need to tie a client to a particular application server — in other words, make the client’s session “sticky” or “persistent” in terms of always trying to select a particular server — the ip-hash load balancing mechanism can be used.

With ip-hash, the client’s IP address is used as a hashing key to determine what server in a server group should be selected for the client’s requests. This method ensures that the requests from the same client will always be directed to the same server except when this server is unavailable.

To configure ip-hash load balancing, just add the ip_hash directive to the server (upstream) group configuration:

upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

Weighted load balancing

It is also possible to influence nginx load balancing algorithms even further by using server weights.

In the examples above, the server weights are not configured which means that all specified servers are treated as equally qualified for a particular load balancing method.

With the round-robin in particular it also means a more or less equal distribution of requests across the servers — provided there are enough requests, and when the requests are processed in a uniform manner and completed fast enough.

When the weight parameter is specified for a server, the weight is accounted as part of the load balancing decision.

upstream myapp1 {
    server srv1.example.com weight=3;
    server srv2.example.com;
    server srv3.example.com;
}

With this configuration, every 5 new requests will be distributed across the application instances as the following: 3 requests will be directed to srv1, one request will go to srv2, and another one — to srv3.

It is similarly possible to use weights with the least-connected and ip-hash load balancing in the recent versions of nginx.

Health checks

Reverse proxy implementation in nginx includes in-band (or passive) server health checks. If the response from a particular server fails with an error, nginx will mark this server as failed, and will try to avoid selecting this server for subsequent inbound requests for a while.

The max_fails directive sets the number of consecutive unsuccessful attempts to communicate with the server that should happen during fail_timeout. By default, max_fails is set to 1. When it is set to 0, health checks are disabled for this server. The fail_timeout parameter also defines how long the server will be marked as failed. After fail_timeout interval following the server failure, nginx will start to gracefully probe the server with the live client’s requests. If the probes have been successful, the server is marked as a live one.
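The two parameters described above sit directly on the server lines of the upstream group; a minimal sketch:

```
upstream myapp1 {
    # mark this server failed after 3 errors within 30s,
    # and skip it for 30s before probing it again
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com;
    # max_fails=0 disables passive health checking for this server
    server srv3.example.com max_fails=0;
}
```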

Other posts


nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft, nginx served or proxied 21.21% of the busiest sites in February 2015. Netflix and FastMail, for example, use nginx.

Apache powers more websites because it has been available for so many years, with Microsoft IIS in next place. Apache slows down under heavy load because of the need to spawn new processes; it also creates new threads that must compete with others for access to memory and CPU.

Installation and configurations


Display directory size in Linux GUI

The link below describes a few tools in Linux that show directory or file system size in a GUI.

How to display directory size in Linux GUI

Read the earlier post for Windows directory size analyzer tools.

Linux platform flaw that can lead to privilege escalation: "Grinch" attacks

Grinch could affect all Linux systems (though it is believed not to be as severe as the Bash "Shellshock" bug), including Web servers and mobile devices. The security hole is actually a common configuration issue related to Polkit, a relatively new component used for controlling system-wide privileges on Unix-like operating systems.

Unlike Sudo, which enables system administrators to give certain users the ability to run commands as root or another user, Polkit allows a finer level of control by delimiting distinct actions and users, and defining how the users can perform those actions.

Privilege escalation can be achieved through “wheel,” a special user group with administrative privileges. On Linux systems, the default user is automatically assigned to this group.
Read the blog post by Stephen Coty, chief security evangelist at Alert Logic, here.

“The problem pointed out by Alert Logic is two fold. First of all, the default Polkit configuration on many Unix systems (e.g. Ubuntu), does not require authentication. Secondly, the Polkit configuration essentially just maps the ‘wheels’ group, which is commonly used for Sudo users, to the Polkit ‘Admin’. This gives users in the ‘wheel’ group access to administrative functions, like installing packages, without having to enter a password,” explained Johannes Ullrich of the SANS Internet Storm Center.

Alert Logic has pointed out that the flaw mostly affects home users, but the company believes an attack could also work in a corporate environment where many users are assigned to the “wheel” group for one reason or another.


Polkit (formerly PolicyKit) is a component for controlling system-wide privileges in Unix-like operating systems. It provides an organized way for non-privileged processes to communicate with privileged ones. Polkit allows a level of control of centralized system policy. It is developed and maintained by David Zeuthen from Red Hat and hosted by the freedesktop.org project. It is published as free software under the terms of version 2 of the GNU Library General Public License.
Fedora was the first distribution to include PolicyKit, and it has since been used in other distributions, including Ubuntu since version 8.04 and openSUSE since version 10.3. Some distributions, like Fedora, have already switched to the rewritten polkit.
