Saturday 29 December 2012

POST using curl, Python and Lynx


Source: 

Hemanth.HM 


Assume that you have a form; in the page source, look for something similar to:
<input name="rid" type="TEXT">
<input name="submit" value="SUBMIT" type="SUBMIT" align="center">
Then, based on your needs, select any of the following methods to POST the data to the form and get the required results back.
Using curl:
curl -s -d "rid=<value>&submit=SUBMIT" <URL> > out.html


Using Python:
#!/usr/bin/env python
# Python 2 example; on Python 3 use urllib.request and urllib.parse instead
import urllib

# URL to post to
url = 'URL'
# Value to send in the "rid" field
rid = 'value'
values = {'rid': rid,
          'submit': 'SUBMIT'}

html = urllib.urlopen(url, urllib.urlencode(values)).read()
print html



Using Lynx to dump the data:
curl -sd 'rid=1ap05cs013&submit=SUBMIT' <URL> | lynx -dump -nolist -stdin > out.html

Friday 21 December 2012

my.cnf for large database + high end server


#
# The MySQL database server configuration file for Large innodb database.
# This is one of my better my.cnf files.

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]

user = mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
skip-external-locking
#skip-innodb
default-storage-engine=InnoDB
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
#bind-address = 1.2.3.4
#
# * Fine Tuning
#
key_buffer = 192M
max_allowed_packet = 64M
thread_stack = 192K
thread_cache_size       = 64
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP
max_connections        = 800

interactive_timeout= 5
wait_timeout= 5

log-slow-queries = /var/log/mysql/slow_query.log
long_query_time = 10

table_cache            = 6048
thread_concurrency     = 22

join_buffer_size = 64M
sort_buffer_size = 16M
max_heap_table_size = 4412M
max_connect_errors = 10
tmp_table_size = 4412M




#*** MyISAM Specific options
key_buffer_size = 64M
read_buffer_size = 4M
read_rnd_buffer_size = 8M

myisam_sort_buffer_size = 128M
#myisam_max_sort_file_size = 10G
#myisam_max_extra_sort_file_size = 10G



#finally added
table_definition_cache = 2048


#default-storage-engine = innodb
innodb_buffer_pool_size = 10000M
innodb_log_file_size = 1500M
innodb_flush_method = O_DIRECT #O_DSYNC
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 4M
innodb_additional_mem_pool_size = 20M

# number of CPUs/cores * 2 is a good baseline for innodb_thread_concurrency
innodb_thread_concurrency = 16




#
# * Query Cache Configuration
#
query_cache_limit = 1M
query_cache_size  = 128M
query_cache_min_res_unit = 1k

#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1

log_error                = /var/log/mysql/error.log

#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size         = 100M


[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completition

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/
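After restarting MySQL, a quick sanity check that the key settings were picked up might look like this (a sketch; assumes the root password, sizes are reported in bytes):
# Confirm the server picked up the important InnoDB and connection settings
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_%size'; SHOW VARIABLES LIKE 'max_connections';"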


Saturday 20 October 2012

drush functions


drush php-eval 'user_delete_cron();'  &&
drush php-eval 'dblog_cron();'  &&
drush php-eval 'filter_cron();'  &&
drush php-eval 'node_cron();'  &&
drush php-eval 'ping_cron();'  &&
drush php-eval 'poll_cron();'  &&
drush php-eval 'statistics_cron();'  &&
drush php-eval 'update_cron();'  &&
drush php-eval 'captcha_cron();'  &&
drush php-eval 'ctools_cron();'  &&
drush php-eval 'db_maintenance_cron();'  &&
drush php-eval 'googleanalytics_cron();'  &&
drush php-eval 'image_cron();'  &&
drush php-eval 'messaging_cron();'  &&
drush php-eval 'notifications_cron();'  &&
drush php-eval 'privatemsg_cron();'  &&
drush php-eval 'scheduler_cron();'  &&
drush php-eval 'session_expire_cron();'  &&
drush php-eval 'spam_cron();'  &&
drush php-eval 'user_stats_cron();'  &&
drush php-eval 'votingapi_cron();'
#drush php-eval 'system_cron();'  &&
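The same chain can also be written as a loop, which is easier to extend; this is just a sketch assuming the same module cron hooks as above and a working drush installation:
# Run each module's cron hook in turn; stop on the first failure
for hook in user_delete dblog filter node ping poll statistics update captcha \
            ctools db_maintenance googleanalytics image messaging notifications \
            privatemsg scheduler session_expire spam user_stats votingapi; do
    drush php-eval "${hook}_cron();" || break
done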

Friday 7 September 2012

Reset mysql forgotten password


First of all you will need to ensure that your database is stopped:
root@steve:~# /etc/init.d/mysql stop
Now you should start up the database in the background, via the mysqld_safe command:
root@steve:~# /usr/bin/mysqld_safe --skip-grant-tables &
[1] 6702
Starting mysqld daemon with databases from /var/lib/mysql
mysqld_safe[6763]: started
Here you can see the new job (number "1") has started and the server is running with the process ID (PID) of 6702.
Now that the server is running with the --skip-grant-tables flag you can connect to it without a password and complete the job:
root@steve:~$ mysql --user=root mysql
Enter password:

mysql> update user set Password=PASSWORD('new-password-here') WHERE User='root';
Query OK, 2 rows affected (0.04 sec)
Rows matched: 2  Changed: 2  Warnings: 0

mysql> flush privileges;
Query OK, 0 rows affected (0.02 sec)

mysql> exit
Bye
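Finally, stop the temporary server that was started with --skip-grant-tables and bring MySQL back up normally (a sketch; the exact init script name may vary by distribution):
root@steve:~# mysqladmin -u root shutdown
root@steve:~# /etc/init.d/mysql start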

Source

Friday 31 August 2012

apache downloads php in the browser

I see this problem on Debian 6. To fix it, just install the missing Apache PHP module:

apt-get install libapache2-mod-php5

and restart Apache.
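On Debian 6 that would typically be something like the following sketch (the package normally registers the module on its own, so a2enmod is just a safety net):
# Make sure the PHP module is enabled, then restart Apache
a2enmod php5
/etc/init.d/apache2 restart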

Monday 30 July 2012

Grant mysql permission

CREATE DATABASE mydb;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON mydb.* TO 'myuser'@'localhost' IDENTIFIED BY 'password';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON mydb.* TO 'myuser'@'localhost.localdomain' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
quit;
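To confirm the grants took effect, a quick check like this can be used (a sketch; it assumes the password set above):
# Connect as the new user and list its privileges
mysql -u myuser -p'password' -e "SHOW GRANTS FOR CURRENT_USER();" mydb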

Monday 23 July 2012

iptables rules to counter common attacks

# Reject spoofed packets
iptables -A INPUT -s 10.0.0.0/8 -j DROP
iptables -A INPUT -s 169.254.0.0/16 -j DROP
iptables -A INPUT -s 172.16.0.0/12 -j DROP
iptables -A INPUT -s 127.0.0.0/8 -j DROP

iptables -A INPUT -s 224.0.0.0/4 -j DROP
iptables -A INPUT -d 224.0.0.0/4 -j DROP
iptables -A INPUT -s 240.0.0.0/5 -j DROP
iptables -A INPUT -d 240.0.0.0/5 -j DROP
iptables -A INPUT -s 0.0.0.0/8 -j DROP
iptables -A INPUT -d 0.0.0.0/8 -j DROP
iptables -A INPUT -d 239.255.255.0/24 -j DROP
iptables -A INPUT -d 255.255.255.255 -j DROP

# Stop smurf attacks
iptables -A INPUT -p icmp -m icmp --icmp-type address-mask-request -j DROP
iptables -A INPUT -p icmp -m icmp --icmp-type timestamp-request -j DROP
iptables -A INPUT -p icmp -m icmp -j DROP

# Drop all invalid packets
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A FORWARD -m state --state INVALID -j DROP
iptables -A OUTPUT -m state --state INVALID -j DROP

# Drop excessive RST packets to avoid smurf attacks
iptables -A INPUT -p tcp -m tcp --tcp-flags RST RST -m limit --limit 2/second --limit-burst 2 -j ACCEPT

# Attempt to block portscans
# Anyone who tried to portscan us is locked out for an entire day.
iptables -A INPUT   -m recent --name portscan --rcheck --seconds 86400 -j DROP
iptables -A FORWARD -m recent --name portscan --rcheck --seconds 86400 -j DROP

# Once the day has passed, remove them from the portscan list
iptables -A INPUT   -m recent --name portscan --remove
iptables -A FORWARD -m recent --name portscan --remove

# These rules add scanners to the portscan list, and log the attempt.
iptables -A INPUT   -p tcp -m tcp --dport 139 -m recent --name portscan --set -j LOG --log-prefix "Portscan:"
iptables -A INPUT   -p tcp -m tcp --dport 139 -m recent --name portscan --set -j DROP

iptables -A FORWARD -p tcp -m tcp --dport 139 -m recent --name portscan --set -j LOG --log-prefix "Portscan:"
iptables -A FORWARD -p tcp -m tcp --dport 139 -m recent --name portscan --set -j DROP
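These rules live only in memory, so they are lost on reboot. One common Debian-style way to persist them is sketched below (paths and mechanism are distribution-dependent):
# Save the current ruleset and restore it before the network comes up
iptables-save > /etc/iptables.rules
printf '#!/bin/sh\niptables-restore < /etc/iptables.rules\n' > /etc/network/if-pre-up.d/iptables
chmod +x /etc/network/if-pre-up.d/iptables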

Monday 16 July 2012

MySQL 4+ has a feature known as the query cache, in which MySQL caches result sets. Suppose a query takes 5 seconds to run and the query cache is enabled, so its results are stored in the cache. The next time the same query is run (remember - exactly the same query, i.e. strcmp(old_query, new_query) == 0), the results are fetched from the cache instead, and this takes far less time - say only 0.1 seconds.

If you have been working with MySQL for some time, you are probably already aware of this feature; the paragraph above is just a refresher.

Now let's check out the variables in the MySQL configuration file (my.cnf) which control the query cache.

mysql> show variables like '%query_cache%';
+-------------------------------+---------+
| Variable_name                 | Value   |
+-------------------------------+---------+
| have_query_cache              | YES     |
| query_cache_limit             | 1048576 |
| query_cache_min_res_unit      | 4096    |
| query_cache_size              | 0       |
| query_cache_type              | ON      |
| query_cache_wlock_invalidate  | OFF     |
+-------------------------------+---------+
6 rows in set (0.00 sec)


have_query_cache says whether MySQL supports the query cache.

query_cache_limit Don't cache results which are larger than this size. By default it is 1 MB. If your result sets are larger, you can increase it as you like.

query_cache_min_res_unit The minimum size for blocks allocated by the query cache. Default is 4096 bytes (4 KB). Will talk about this later.

query_cache_size Amount of memory allocated for caching results. Default is 0, which disables the query cache. You can set it to 128 MB or 1 GB, depending on the memory available on your machine.

query_cache_type 0 or OFF turns query caching off. 1 or ON turns the query cache on, and the result set of every MySQL query is then cached. 2 or DEMAND enables the query cache but not all result sets are cached; to cache results in this case you have to specify "SQL_CACHE" in the query.

query_cache_wlock_invalidate Setting this variable to 1 causes acquisition of a WRITE lock for a table to invalidate any queries in the query cache that refer to that table. This forces other clients that attempt to access the table to wait while the lock is in effect.

Now let's see how the query cache works and how to tune it.

mysql> show status like '%qcache%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 7         |
| Qcache_free_memory      | 133638224 |
| Qcache_hits             | 284       |
| Qcache_inserts          | 626       |
| Qcache_lowmem_prunes    | 0         |
| Qcache_not_cached       | 0         |
| Qcache_queries_in_cache | 550       |
| Qcache_total_blocks     | 1116      |
+-------------------------+-----------+
8 rows in set (0.00 sec)


Qcache_free_blocks Number of free memory blocks in query cache

Qcache_free_memory Amount of free memory in query cache

Qcache_hits Number of hits to the query cache. Or, the number of times a query was found in the query cache.

Qcache_inserts Number of queries inserted in the query cache.

Qcache_lowmem_prunes Number of queries that were deleted from the query cache due to low cache memory.

Qcache_not_cached Number of not-cached queries

Qcache_queries_in_cache Number of queries registered in the query cache

Qcache_total_blocks Total number of blocks in the query cache

As queries are inserted in the cache, Qcache_inserts and Qcache_queries_in_cache increase, and Qcache_free_memory of course decreases. Whenever any DML query is run on a table, the queries in the cache related to that table are removed.

Some status variables indicate the efficiency of the query cache:

If the number of Qcache_hits is less than Qcache_queries_in_cache, the cached queries are not being reused efficiently. And if Qcache_not_cached increases very quickly, queries are not being cached at all. This could be because the result sets of the queries are bigger than query_cache_limit, in which case you should increase this variable from its default value of 1M to 2M or maybe more.

If the variable Qcache_lowmem_prunes is increasing very fast, it means that the memory allocated to the query cache is too low, because MySQL is freeing up cached queries to make room for new ones. MySQL is indirectly asking you to increase query_cache_size.

MySQL allocates memory for query result sets in blocks. The default block size is 4K, so Qcache_free_blocks can be an indication of fragmentation: a high number relative to Qcache_total_blocks means that the cache memory is seriously fragmented. If result sets are much smaller than 4K, fragmentation is high. Another variable, query_cache_min_res_unit, can then be used to decrease the block size from 4K to maybe 2K and help reduce fragmentation.

MySQL query cache is a very efficient tool if used properly.
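As a rough way to watch the efficiency described above, the hit ratio can be derived from the status counters. This is a sketch using one common approximation (Qcache_hits / (Qcache_hits + Com_select)); adjust credentials as needed:
# Compute an approximate query cache hit ratio from SHOW STATUS
mysql -u root -p -e "SHOW STATUS LIKE 'Qcache_hits'; SHOW STATUS LIKE 'Com_select';" | \
  awk '/Qcache_hits/ {h=$2} /Com_select/ {s=$2} END {printf "hit ratio: %.1f%%\n", 100*h/(h+s)}'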



Source 
Other related and useful articles:

- Optimizing the MySQL Query Cache

Saturday 7 July 2012

Iptables block common attacks


The following list summarizes common attacks on any type of Linux computer:

Syn-flood protection

In this attack the system is flooded with a series of SYN packets. Each packet causes the system to issue a SYN-ACK response and then wait for the ACK that completes the three-way handshake. Since the attacker never sends the ACK back, system resources fill up (the backlog queue). Once the queue is full, the system ignores incoming requests from legitimate users for services (http/mail etc.). Hence it is necessary to stop this attack with iptables.
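A minimal rate-limiting sketch using the iptables limit module (the thresholds are illustrative and should be tuned for your traffic):
# Accept new SYN packets only up to a modest rate; drop the excess
iptables -A INPUT -p tcp --syn -m limit --limit 5/second --limit-burst 10 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP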

Force SYN packets check

Make sure NEW incoming tcp connections are SYN packets; otherwise we need to drop them:
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

Force Fragments packets check

Drop incoming fragmented packets. Fragment attacks can make a Linux server panic and lose data.
iptables -A INPUT -f -j DROP

XMAS packets

Drop incoming malformed XMAS packets:
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

Drop all NULL packets

Drop incoming malformed NULL packets:
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

Block Spoofing and bad addresses

Using iptables you can filter out and drop packets with suspicious source addresses. A network server should not accept packets arriving from the Internet that claim to originate from inside your network. Spoofing can be classified as:
a) IP spoofing – disable source-address-based authentication, for example rhosts-based authentication, and filter RPC-based services such as portmap and NFS;
b) DNS spoofing.
Please see the "Iptables: How to avoid Spoofing and bad addresses attack" tip for more information.
Also use NAT for your internal network. This makes it difficult for an attacker to spoof an IP address from outside.

Filter incoming ICMP, PING traffic

This includes the ping of death attack and ICMP floods. You should block all ICMP and ping traffic from outside, except from your own internal network (so that you can ping to check the status of your own server). See the "Linux: Iptables Allow or block ICMP ping request" article.
Once the system is secured, test your firewall with the nmap or hping2 command:
# nmap -v -f FIREWALL-IP
# nmap -v -sX FIREWALL-IP
# nmap -v -sN FIREWALL-IP
# hping2 -X FIREWALL-IP

Saturday 23 June 2012

memcached on Debian for use with Drupal


1. Install memcached on your server.

  • Open the Terminal Window and enter :
apt-get install memcached libmemcached-tools

2. Install memcache PHP extension using PECL.

  • PECL is great for installing PHP extensions.
apt-get install php5-dev php-pear make
  • After you have installed PECL on your system, open the Terminal Window and enter :
pecl install memcache

3. Add memcache.so to php.ini

  • We must instruct PHP to load the extension.
  • You can do this by adding a file named memcache.ini to the PHP configuration directory (here /etc/php5/fpm/conf.d for php5-fpm)
  • Open the Terminal Window and enter :
nano /etc/php5/fpm/conf.d/memcache.ini
  • Add the following line to the file and save :
extension=memcache.so
  • If you intend to use memcached with Drupal also add the following line to your php.ini or memcache.ini file and save :
memcache.hash_strategy="consistent"
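To verify the extension is actually loaded afterwards, a quick sketch like this can be used (it assumes the php5-fpm binary is on the PATH; restart the service first so the new .ini is read):
# Restart php5-fpm, then check that the module is listed
service php5-fpm restart
php5-fpm -m | grep -i memcache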

4. Open firewall port 11211.

  • The default port for the memcached server is TCP port 11211.
  • Configure your firewall to open port 11211 for TCP traffic.

5. Configure the memcached allowed memory.

  • All memcached configuration settings can be found in /etc/memcached.conf
  • The default memory setting for memcached is 64 MB. 
  • Depending on the amount of RAM available on the server allocate a block of memory to memcached.
  • Open the Terminal Window and enter :
nano /etc/memcached.conf
  • Change the following line FROM-
# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 64
  • TO the following by changing the -m 64 to -m 4096 to allow memcached 4 GB of RAM. Adjust the size in MB according to the memory that you have available. Save the file when done.
# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 4096

6. Start the memcached service.

  • Open the Terminal Window and enter :
service memcached start
  • OR on older systems :
/etc/init.d/memcached start

7. Restart nginx.

  • Open the Terminal Window and enter :
service nginx restart
  • OR on older systems :
sudo /etc/init.d/nginx restart

8. Check to see if memcached server is active and listening on port 11211.

  • Open the Terminal Window and enter :
netstat -tap | grep memcached

9. Check the status and stats with memstat tool

  • Part of the memcached package is a handy tool called memstat.
  • You need to specify the host IP and port. In this case the host IP is 127.0.0.1 and the port 11211.
  • Open the Terminal Window and enter :
memstat 127.0.0.1:11211
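If memstat is not available, the same stats can be pulled straight from the daemon over its plain-text protocol with netcat (a sketch; assumes a netcat package is installed):
# Ask memcached for its stats, then quit so nc exits cleanly
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211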

10. Activate the Drupal memcached module.

  • Install the Drupal Memcache module and activate. For more complete instructions visit the Drupal Memcache Documentation
  • Edit settings.php in your Drupal installation to include memcache.inc
  • For Drupal 6, edit the settings.php file and add the following :
$conf['cache_inc'] ='sites/all/modules/memcache/memcache.inc';
  • For Drupal 7, edit the settings.php file and add the following :
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
$conf['memcache_key_prefix'] = 'something_unique';
* note : Replace the "something_unique" in the last line with your own unique memcache key prefix. The memcache_key_prefix is also needed for both Drupal 6 & 7 in a multi-site environment if you would like to use memcached for more than one Drupal installation on the same server. 

Source (with slight modifications to account for Debian, nginx & php5-fpm)

Monday 11 June 2012

Convert MySQL engine from InnoDB to MyISAM

#!/bin/sh
# Convert every table in the given database to the MyISAM engine
DBNAME="DBName"
DBUSER="root"
DBPWD="YourPassword"
for t in $(mysql -u$DBUSER -p$DBPWD --batch --column-names=false -e "show tables" $DBNAME);
do
    echo "Converting table $t"
    mysql -u$DBUSER -p$DBPWD -e "alter table $t engine=MyISAM" $DBNAME;
done
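Afterwards, a quick check that every table really uses MyISAM might look like this (a sketch reusing the same variables as the script above; the second column of SHOW TABLE STATUS is the engine):
# List any tables that are still not MyISAM
mysql -u$DBUSER -p$DBPWD -e "SHOW TABLE STATUS" $DBNAME | awk 'NR>1 && $2 != "MyISAM" {print $1, $2}'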

Wednesday 23 May 2012

nginx php5-fpm on debian 6





First add the dotdeb repo to your sources.list file:
Code:
nano /etc/apt/sources.list
add this to the bottom of the file:
Code:
deb http://packages.dotdeb.org squeeze all
deb-src http://packages.dotdeb.org squeeze all
Next, add the GnuPG key to your distribution:
Code:
wget http://www.dotdeb.org/dotdeb.gpg
cat dotdeb.gpg | apt-key add -
rm dotdeb.gpg
Update APT:
Code:
apt-get update
Install php and php-fpm plus some common addons:
Code:
apt-get install php5 php5-fpm php-pear php5-common php5-mcrypt php5-mysql php5-cli php5-gd
Install nginx:
Code:
apt-get install nginx
Tweak the php-fpm configuration:
Code:
nano /etc/php5/fpm/pool.d/www.conf 
The following tweaks have been customized for a 512MB-1GB VPS. You can use the same numbers here or come up with your own. This is just what I have found to be the most resource friendly for a lightly used VPS:
Code:
pm.max_children = 25
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 10
pm.max_requests = 500
This line is optional but I highly suggest you use it. Basically it says that if a php-fpm process hangs for more than 30 seconds it will be terminated. This adds stability and reliability to your php application in the event there is a problem:
Code:
request_terminate_timeout = 30s
restart php-fpm:
Code:
/etc/init.d/php5-fpm restart
Tweak your nginx configuration:
Code:
nano /etc/nginx/nginx.conf
The client_max_body_size option changes the max from the 1MB default (which is a must for most php apps):
Code:
client_max_body_size 20M;
client_body_buffer_size 128k;
Remove the default vhost symlink:
Code:
cd /etc/nginx/sites-enabled
rm default
Create a fresh and clean vhost file:
Code:
nano /etc/nginx/sites-available/www.website.com
Code:
server {
                listen 80;
                server_name website.com www.website.com;

                access_log /var/log/nginx/website.access_log;
                error_log /var/log/nginx/website.error_log;

                root /var/www/www.website.com;
                index index.php index.htm index.html;

                location ~ \.php$ {
                  fastcgi_pass   127.0.0.1:9000;
                  fastcgi_index  index.php;
                  fastcgi_param  SCRIPT_FILENAME /var/www/www.website.com$fastcgi_script_name;
                  include fastcgi_params;
                }
       }
Create a new symlink for the new vhost under sites-enabled:
Code:
ln -s /etc/nginx/sites-available/www.website.com /etc/nginx/sites-enabled/www.website.com
Restart nginx:
Code:
/etc/init.d/nginx restart
That's it. You should now have a clean installation of nginx and php-fpm with a single vhost running on port 80. If you want to have the same vhost running with ssl on port 443, copy and paste the entire vhost code into the bottom of the vhost file, change 'listen' to 443:
Code:
                listen 443;
Then add these lines that will point to your ssl certs:
Code:
                ssl on;
                ssl_certificate /path/to/certificate/www.website.com.crt;
                ssl_certificate_key /path/to/certificate_key/www.website.com.key;
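Before restarting with the new vhost, it is worth validating the configuration first (a small sketch):
Code:
nginx -t && /etc/init.d/nginx restart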
Please let me know if there is anything I missed or anything that might need to be changed.


Monday 9 January 2012

Optimized my.cnf (for 16G memory)

#After A LOT of customization headaches, here is the config that works pretty well on my server
#This may not be the most optimized, but it is definitely good as a starting point for a busy mysql/php server



#
# The MySQL database server configuration file.
#

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]

user = mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer = 16M
max_allowed_packet = 128M
thread_stack = 192K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP
max_connections        = 150
table_cache            = 2048
thread_concurrency     = 16

join_buffer_size = 8M
sort_buffer_size = 8M
max_heap_table_size = 512M
max_connect_errors = 10
tmp_table_size = 512M


#*** MyISAM Specific options
key_buffer_size = 32M
read_buffer_size = 2M
read_rnd_buffer_size = 16M

myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 5G
myisam_max_extra_sort_file_size = 5G



#
# * Query Cache Configuration
#
query_cache_limit = 2M
query_cache_size        = 96M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1

log_error                = /var/log/mysql/error.log

#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size         = 100M


[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completition

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/