Wednesday, December 18, 2013
Tuesday, November 26, 2013
MySQL replication without downtime
I clearly don’t need to expound on the benefits of master-slave replication for your MySQL database. It’s simply a good idea; one nicety I looked forward to was the ability to run backups from the slave without impacting the performance of our production database. But the benefits abound.
Most tutorials on master-slave replication use a read lock to accomplish a consistent copy during initial setup. Barbaric! With our users sending thousands of cards and gifts at all hours of the night, I wanted to find a way to accomplish the migration without any downtime.
@pQd via ServerFault suggests enabling bin-logging and taking a non-locking dump with the binlog position included. In effect, you’re creating a copy of the db marked with a timestamp, which allows the slave to catch up once you’ve migrated the data over. This seems like the best way to set up a MySQL slave with no downtime, so I figured I’d document the step-by-step here, in case it proves helpful for others.
First, you’ll need to configure the master’s /etc/mysql/my.cnf by adding these lines in the [mysqld] section:
server-id=1
binlog-format = mixed
log-bin=mysql-bin
datadir=/var/lib/mysql
innodb_flush_log_at_trx_commit=1
sync_binlog=1
Restart the master MySQL server and create a replication user that your slave server will use to connect to the master:
CREATE USER replicant@<<slave-server-ip>>;
GRANT REPLICATION SLAVE ON *.* TO replicant@<<slave-server-ip>> IDENTIFIED BY '<<choose-a-good-password>>';
Note: MySQL allows for passwords up to 32 characters for replication users.
Next, create the backup file with the binlog position. It will affect the performance of your database server, but won’t lock your tables:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A > ~/dump.sql
Now, examine the head of the file and jot down the values for MASTER_LOG_FILE and MASTER_LOG_POS. You will need them later:
head dump.sql -n80 | grep "MASTER_LOG_POS"
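If you script this step, the coordinates can be pulled out of the dump header programmatically. A minimal sketch (the helper name and sample line are mine; the line format is what mysqldump with --master-data=2 writes near the top of the dump):

```python
import re

def binlog_coordinates(dump_head: str):
    """Extract MASTER_LOG_FILE and MASTER_LOG_POS from the commented
    CHANGE MASTER line near the top of a --master-data=2 dump."""
    m = re.search(r"MASTER_LOG_FILE='([^']+)',\s*MASTER_LOG_POS=(\d+)", dump_head)
    if m is None:
        raise ValueError("no CHANGE MASTER line found in dump header")
    return m.group(1), int(m.group(2))

# Sample of the commented line as it appears in the dump:
head = "-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=107;"
print(binlog_coordinates(head))  # → ('mysql-bin.000002', 107)
```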
Because this file for me was huge, I gzip'ed it before transferring it to the slave, but that’s optional:
gzip ~/dump.sql
Now we need to transfer the dump file to our slave server (if you didn’t gzip first, remove the .gz bit):
scp ~/dump.sql.gz mysql-user@<<slave-server-ip>>:~/
While that’s running, you should log into your slave server, and edit your /etc/mysql/my.cnf file to add the following lines:
server-id = 101
binlog-format = mixed
log_bin = mysql-bin
relay-log = mysql-relay-bin
log-slave-updates = 1
read-only = 1
Restart the mysql slave, and then import your dump file:
gunzip ~/dump.sql.gz
mysql -u root -p < ~/dump.sql
Log into your MySQL console on your slave server and run the following commands to set up and start replication:
CHANGE MASTER TO MASTER_HOST='<<master-server-ip>>', MASTER_USER='replicant', MASTER_PASSWORD='<<slave-server-password>>', MASTER_LOG_FILE='<<value from above>>', MASTER_LOG_POS=<<value from above>>;
START SLAVE;
To check the progress of your slave:
SHOW SLAVE STATUS \G
If all is well, Last_Error will be blank, and Slave_IO_State will report “Waiting for master to send event”. Look for Seconds_Behind_Master, which indicates how far behind it is. It took me a few hours to accomplish all of the above, but the slave caught up in a matter of minutes. YMMV.
And now you have a newly minted MySQL slave server without experiencing any downtime!
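If you monitor the slave from a script rather than by eye, the \G output is easy to turn into a dict. A rough sketch (the sample text is abbreviated; real output has many more fields):

```python
def parse_slave_status(output: str) -> dict:
    """Parse the `Key: value` lines printed by SHOW SLAVE STATUS \\G."""
    status = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

sample = """\
Slave_IO_State: Waiting for master to send event
Seconds_Behind_Master: 42
Last_Error:
"""
status = parse_slave_status(sample)
print(status["Seconds_Behind_Master"])  # → 42
```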
A parting tip: sometimes errors occur in replication, for example if you accidentally change a row of data on your slave. If this happens, fix the data, then run:
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
Update: In following my own post when setting up another slave, I ran into an issue with authentication. The slave status showed an error of 1045 (credential error) even though I was able to directly connect using the replicant credentials. It turns out that MySQL allows passwords up to 32 characters in length for master-slave replication.
Monday, November 25, 2013
Gearman - a solution for communication between subsystems (written in different languages)
http://gearman.org/#how_does_gearman_work
Thursday, October 31, 2013
Thursday, October 24, 2013
Patch for nginx to allow case-insensitive http method
Problem encountered: Nginx is case-sensitive about HTTP methods and only accepts the upper-case forms GET, POST, HEAD, etc. This can lead to 400 Bad Request errors (specifically observed with an iPhone app that sent the method as "get").
Fix: edit the file src/http/ngx_http_parse.c and add the lines shown in bold (around line 150):
if (ch == CR || ch == LF) {
    break;
}

/* added: uppercase the method so lower-case methods are accepted */
int i;
for (i = 0; isalpha(p[i]); i++) {
    p[i] = toupper(p[i]);
    ch = *p;
}

if ((ch < 'A' || ch > 'Z') && ch != '_') {
    return NGX_HTTP_PARSE_INVALID_METHOD;
}
Source: https://gist.github.com/yatt/1908067
Note: the code from that source has mismatched {} braces and needs a small fix before it will compile.
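For illustration only, the effect of the patch can be mimicked in Python: uppercase the leading alphabetic run of the request line (the method) before the strict upper-case check, which is what the added toupper() loop does:

```python
def normalize_method(request_line: bytes) -> bytes:
    """Uppercase the leading ASCII letters of the request line,
    mirroring the toupper() loop added to ngx_http_parse.c."""
    out = bytearray(request_line)
    for i, b in enumerate(out):
        if not (ord("a") <= b <= ord("z") or ord("A") <= b <= ord("Z")):
            break  # stop at the first non-letter (the space after the method)
        out[i] = b & ~0x20  # clear bit 5: ASCII lower -> upper
    return bytes(out)

print(normalize_method(b"get /index.html HTTP/1.1"))  # → b'GET /index.html HTTP/1.1'
```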
Wednesday, October 23, 2013
Controlling JAX-WS RI timeouts
MyWebService service = new MyWebService();
MyWebServicePortType client = service.MyWebServicePort();
Client cl = ClientProxy.getClient(client);
HTTPConduit http = (HTTPConduit) cl.getConduit();
HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy();
httpClientPolicy.setConnectionTimeout(0);  // 0 = no timeout
httpClientPolicy.setReceiveTimeout(0);     // 0 = no timeout
http.setClient(httpClientPolicy);
client.doSomething(...);
Tuesday, October 15, 2013
Thursday, September 26, 2013
Deleting duplicate rows in a table
I'm using this approach, but it's not clear how it behaves on a large database; I'll update the results here after trying it.
DELETE FROM table1
USING table1, table1 AS vtable
WHERE (table1.ID > vtable.ID)
AND (table1.field_name = vtable.field_name)
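Before running this against production data, the keep-the-lowest-ID rule can be tried on a throwaway database. A sketch with sqlite3 (SQLite has no multi-table DELETE ... USING, so an equivalent subquery form is used; table and column names mirror the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (ID INTEGER PRIMARY KEY, field_name TEXT)")
conn.executemany("INSERT INTO table1 (field_name) VALUES (?)",
                 [("a",), ("a",), ("b",), ("a",), ("b",)])

# Keep only the row with the smallest ID for each field_name value.
conn.execute("""
    DELETE FROM table1
    WHERE ID NOT IN (SELECT MIN(ID) FROM table1 GROUP BY field_name)
""")
rows = conn.execute("SELECT ID, field_name FROM table1 ORDER BY ID").fetchall()
print(rows)  # → [(1, 'a'), (3, 'b')]
```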
Sunday, September 8, 2013
Saturday, August 31, 2013
10 More Common Mistakes Java Developers Make when Writing SQL
I was positively surprised to see how popular my recent listing about 10 Common Mistakes Java Developers Make when Writing SQL was, both on my own blog and on my syndication partner DZone. The popularity shows a couple of things:
- How important SQL is to the professional Java world.
- How common it is to forget about some basic SQL things.
- How well SQL-centric libraries such as jOOQ or MyBatis are responding to market needs, by embracing SQL. An amusing fact is that users have even mentioned my blog post on SLICK’s mailing list. SLICK is a non-SQL-centric database access library in Scala. Like LINQ (and LINQ-to-SQL) it focuses on language integration, not on SQL code generation.
1. Not using PreparedStatements
Interestingly, this mistake or misbelief still surfaces on blogs, forums and mailing lists many years after the appearance of JDBC, even if it is about a very simple thing to remember and to understand. It appears that some developers refrain from using PreparedStatements for any of these reasons:
- They don’t know about PreparedStatements
- They think that PreparedStatements are slower
- They think that writing a PreparedStatement takes more effort
- You can omit syntax errors originating from bad string concatenation when inlining bind values.
- You can omit SQL injection vulnerabilities from bad string concatenation when inlining bind values.
- You can avoid edge-cases when inlining more “sophisticated” data types, such as TIMESTAMP, binary data, and others.
- You can keep open PreparedStatements around for a while, reusing them with new bind values instead of closing them immediately (useful in Postgres, for instance).
- You can make use of adaptive cursor sharing (Oracle-speak) in more sophisticated databases. This helps prevent hard-parsing SQL statements for every new set of bind values.
- DELETED = 1
- STATUS = 42
- FIRST_NAME LIKE “Jon%”
- AMOUNT > 19.95
More background info:
- Caveats of bind value peeking: An interesting blog post by Oracle Guru Tanel Poder on the subject
- Cursor sharing. An interesting Stack Overflow question.
The Cure:
By default, always use PreparedStatements instead of static statements. By default, never inline bind values into your SQL.
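The same discipline applies outside JDBC. A small sketch with Python's DB-API (sqlite3 standing in for the real database; table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first_name TEXT, status INTEGER)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("Jon", 42))

name = "Jon"
# Bad: inlining the value concatenates user input straight into the SQL text:
#   "SELECT ... WHERE first_name = '" + name + "'"
# Good: placeholders let the driver bind the values separately:
rows = conn.execute(
    "SELECT first_name, status FROM users WHERE first_name = ? AND status = ?",
    (name, 42),
).fetchall()
print(rows)  # → [('Jon', 42)]
```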
2. Returning too many columns
This mistake is quite frequent and can lead to very bad effects both in your database’s execution plan and in your Java application. Let’s look at the second effect first:
Bad effects on the Java application:
If you’re selecting * (star) or a “default” set of 50 columns, which you’re reusing among various DAOs, you’re transferring lots of data from the database into a JDBC ResultSet. Even if you’re not reading the data from the ResultSet, it has been transferred over the wire and loaded into your memory by the JDBC driver. That’s quite a waste of IO and memory if you know that you’re only going to need 2-3 of those columns.
This was obvious, but beware also of…
Bad effects on the database execution plan:
These effects may actually be much worse than the effects on the Java application. Sophisticated databases perform a lot of SQL transformation when calculating the best execution plan for your query. It may well be that some parts of your query can be “transformed away”, knowing that they won’t contribute to the projection (SELECT clause) or to the filtering predicates. I’ve recently blogged about this in the context of schema meta data:
How schema meta data impacts Oracle query transformations
Now, this is quite a beast. Think about a sophisticated SELECT that will join two views:

SELECT *
FROM customer_view c
JOIN order_view o ON c.cust_id = o.cust_id

Each of the views that are joined to the above joined table reference might again join data from dozens of tables, such as CUSTOMER_ADDRESS, ORDER_HISTORY, ORDER_SETTLEMENT, etc. Given the SELECT * projection, your database has no choice but to fully perform the loading of all those joined tables, when in fact, the only thing that you were interested in was this:

SELECT c.first_name, c.last_name, o.amount
FROM customer_view c
JOIN order_view o ON c.cust_id = o.cust_id

A good database will transform your SQL in a way that most of the “hidden” joins can be removed, which results in much less IO and memory consumption within the database.
The Cure:
Never execute SELECT *. Never reuse the same projection for various queries. Always try to reduce the projection to the data that you really need.
Note that this can be quite hard to achieve with ORMs.
3. Thinking that JOIN is a SELECT clause
This isn’t a mistake with a lot of impact on performance or SQL correctness, but nevertheless, SQL developers should be aware of the fact that the JOIN clause is not part of the SELECT statement per se. The SQL standard 1992 defines a table reference as such:

6.3 <table reference>
<table reference> ::=
    <table name> [ [ AS ] <correlation name>
      [ <left paren> <derived column list> <right paren> ] ]
  | <derived table> [ AS ] <correlation name>
      [ <left paren> <derived column list> <right paren> ]
  | <joined table>

The FROM clause and also the joined table can then make use of such table references:

7.4 <from clause>
<from clause> ::= FROM <table reference> [ { <comma> <table reference> }... ]

7.5 <joined table>
<joined table> ::=
    <cross join>
  | <qualified join>
  | <left paren> <joined table> <right paren>
<cross join> ::= <table reference> CROSS JOIN <table reference>
<qualified join> ::= <table reference> [ NATURAL ] [ <join type> ] JOIN <table reference> [ <join specification> ]

Relational databases are very table-centric. Many operations are performed on physical, joined or derived tables in one way or another. To write SQL effectively, it is important to understand that the SELECT .. FROM clause expects a comma-separated list of table references in whatever form they may be provided.
Depending on the complexity of the table reference, some databases also accept sophisticated table references in other statements, such as INSERT, UPDATE, DELETE, MERGE. See Oracle’s manuals for instance, explaining how to create updatable views.
The Cure:
Always think of your FROM clause to expect a table reference as a whole. If you write a JOIN clause, think of this JOIN clause to be part of a complex table reference:

SELECT c.first_name, c.last_name, o.amount
FROM customer_view c
  JOIN order_view o ON c.cust_id = o.cust_id
4. Using pre-ANSI JOIN syntax
Now that we’ve clarified how table references work (see the previous point), it should become a bit more obvious that the pre-ANSI JOIN syntax should be avoided at all costs. To execution plans, it usually makes no difference if you specify join predicates in the JOIN .. ON clause or in the WHERE clause. But from a readability and maintenance perspective, using the WHERE clause for both filtering predicates and join predicates is a major quagmire. Consider this simple example:

SELECT c.first_name, c.last_name, o.amount
FROM customer_view c,
     order_view o
WHERE o.amount > 100
AND c.cust_id = o.cust_id
AND c.language = 'en'

Can you spot the join predicate? What if we joined dozens of tables? This gets much worse when applying proprietary syntaxes for outer join, such as Oracle’s (+) syntax.
The Cure:
Always use the ANSI JOIN syntax. Never put JOIN predicates into the WHERE clause. There is absolutely no advantage to using the pre-ANSI JOIN syntax.

5. Forgetting to escape input to the LIKE predicate
The SQL standard 1992 specifies the like predicate as such:

8.5 <like predicate>
<like predicate> ::= <match value> [ NOT ] LIKE <pattern> [ ESCAPE <escape character> ]

The ESCAPE keyword should be used almost always when allowing for user input to be used in your SQL queries. While it may be rare that the percent sign (%) is actually supposed to be part of the data, the underscore (_) might well be:

SELECT *
FROM t
WHERE t.x LIKE 'some!_prefix%' ESCAPE '!'
The Cure:
Always think of proper escaping when using the LIKE predicate.
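sqlite3 implements the same ESCAPE clause, which makes the advice easy to verify. A sketch; the escape_like helper is mine, not from the article:

```python
import sqlite3

def escape_like(term: str, esc: str = "!") -> str:
    """Escape LIKE wildcards in user input so they match literally."""
    return (term.replace(esc, esc + esc)
                .replace("%", esc + "%")
                .replace("_", esc + "_"))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("some_prefix123",), ("someXprefix123",)])

user_input = "some_prefix"  # the underscore must match literally
pattern = escape_like(user_input) + "%"  # 'some!_prefix%'
rows = conn.execute("SELECT x FROM t WHERE x LIKE ? ESCAPE '!'",
                    (pattern,)).fetchall()
print(rows)  # → [('some_prefix123',)]
```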
6. Thinking that NOT (A IN (X, Y)) is the boolean inverse of A IN (X, Y)
This one is subtle but very important with respect to NULLs! Let’s review what A IN (X, Y) really means:

A IN (X, Y)
is the same as A = ANY (X, Y)
is the same as A = X OR A = Y

When at the same time, NOT (A IN (X, Y)) really means:

NOT (A IN (X, Y))
is the same as A NOT IN (X, Y)
is the same as A != ANY (X, Y)
is the same as A != X AND A != Y

That looks like the boolean inverse of the previous predicate, but it isn’t! If any of X or Y is NULL, the NOT IN predicate will result in UNKNOWN whereas the IN predicate might still return a boolean value. Or in other words, when A IN (X, Y) yields TRUE or FALSE, NOT(A IN (X, Y)) may still yield UNKNOWN instead of FALSE or TRUE. Note, that this is also true if the right-hand side of the IN predicate is a subquery.
Don’t believe it? See this SQL Fiddle for yourself. It shows that the following query yields no result:
SELECT 1
WHERE 1 IN (NULL)
UNION ALL
SELECT 2
WHERE NOT (1 IN (NULL))

More details can be seen in my previous blog post on that subject, which also shows some SQL dialect incompatibilities in that area.
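You don't even need the SQL Fiddle; sqlite3 reproduces the three-valued-logic result, with both branches evaluating to UNKNOWN and therefore returning no rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    SELECT 1 WHERE 1 IN (NULL)
    UNION ALL
    SELECT 2 WHERE NOT (1 IN (NULL))
""").fetchall()
print(rows)  # → []  (both WHERE clauses yield UNKNOWN, not TRUE)
```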
The Cure:
Beware of the NOT IN predicate when nullable columns are involved!

7. Thinking that NOT (A IS NULL) is the same as A IS NOT NULL
Right, so we remembered that SQL implements three-valued logic when it comes to handling NULL. That’s why we can use the NULL predicate to check for NULL values. Right? Right.
But even the NULL predicate is subtle. Beware that the two following predicates are only equivalent for row value expressions of degree 1:
NOT (A IS NULL) is not the same as A IS NOT NULL
If A is a row value expression with a degree of more than 1, then the truth table is transformed such that:
- A IS NULL yields true only if all values in A are NULL
- NOT(A IS NULL) yields false only if all values in A are NULL
- A IS NOT NULL yields true only if all values in A are NOT NULL
- NOT(A IS NOT NULL) yields false only if all values in A are NOT NULL
The Cure:
When using row value expressions, beware of the NULL predicate, which might not work as expected.
8. Not using row value expressions where they are supported
Row value expressions are an awesome SQL feature. While SQL is a very table-centric language, tables are also very row-centric. Row value expressions let you describe complex predicates much more easily, by creating local ad-hoc rows that can be compared with other rows of the same degree and row type. A simple example is to query customers for first names and last names at the same time.
SELECT c.address
FROM customer c
WHERE (c.first_name, c.last_name) = (?, ?)

As can be seen, this syntax is slightly more concise than the equivalent syntax where each column from the predicate’s left-hand side is compared with the corresponding column on the right-hand side. This is particularly true if many independent predicates are combined with AND. Using row value expressions allows you to combine correlated predicates into one. This is most useful for join expressions on composite foreign keys:

SELECT c.first_name, c.last_name, a.street
FROM customer c
JOIN address a ON (c.id, c.tenant_id) = (a.id, a.tenant_id)

Unfortunately, not all databases support row value expressions in the same way. But the SQL standard had defined them already in 1992, and if you use them, sophisticated databases like Oracle or Postgres can use them for calculating better execution plans. This is explained on the popular Use The Index, Luke page.
The Cure:
Use row value expressions whenever you can. They will make your SQL more concise and possibly even faster.
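SQLite also supports row value expressions (since 3.15), so the first-name/last-name example can be sketched with Python's sqlite3 module; the table and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # requires SQLite 3.15+ for row values
conn.execute("CREATE TABLE customer (first_name TEXT, last_name TEXT, address TEXT)")
conn.execute("INSERT INTO customer VALUES ('Jon', 'Doe', '1 Main St')")

# One correlated predicate instead of two AND-ed column comparisons.
rows = conn.execute(
    "SELECT address FROM customer WHERE (first_name, last_name) = (?, ?)",
    ("Jon", "Doe"),
).fetchall()
print(rows)  # → [('1 Main St',)]
```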
9. Not defining enough constraints
So, I’m going to cite Tom Kyte and Use The Index, Luke again. You cannot have enough constraints in your meta data. First off, constraints help you keep your data from corrupting, which is already very useful. But to me, more importantly, constraints will help the database perform SQL transformations, as the database can decide that
- Some values are equivalent
- Some clauses are redundant
- Some clauses are “void” (i.e. they will not return any values)
The Cure:
Define as many constraints as you can. They will help your database to perform better when querying.
10. Thinking that 50ms is fast query execution
The NoSQL hype is still ongoing, and many companies still think they’re Twitter or Facebook in dire need of faster, more scalable solutions, escaping ACID and relational models to scale horizontally. Some may succeed (e.g. Twitter or Facebook), others may run into this:
For the others who are forced (or chose) to stick with proven relational databases, don’t be tricked into thinking that modern databases are slow. They’re hyper fast. In fact, they’re so fast, they can parse your 20kb query text, calculate 2000-line execution plans, and actually execute that monster in less than a millisecond, if you and your DBA get along well and tune your database to the max.
They may be slow because of your application misusing a popular ORM, or because that ORM won’t be able to produce fast SQL for your complex querying logic. In that case, you may want to choose a more SQL-centric API like JDBC, jOOQ or MyBatis that will let you get back in control of your SQL.
So, don’t think that a query execution of 50ms is fast or even acceptable. It’s not. If you get these speeds at development time, make sure you investigate execution plans. Those monsters might explode in production, where you have more complex contexts and data.
Conclusion
SQL is a lot of fun, but also very subtle in various ways. It’s not easy to get it right as my previous blog post about 10 common mistakes has shown. But SQL can be mastered and it’s worth the trouble. Data is your most valuable asset. Treat data with respect and write better SQL.
Source: http://blog.jooq.org/2013/08/12/10-more-common-mistakes-java-developers-make-when-writing-sql/
Monday, August 26, 2013
Controlling timeouts when mounting NFS
sudo mount localhost:xxx xxx -o bg,intr,soft,timeo=1,retrans=1,actimeo=1,retry=1
Where:
bg,intr,soft
You can in addition set:
bg      If the first NFS mount attempt times out, retry the mount in the background. After a mount operation is backgrounded, all subsequent mounts on the same NFS server will be backgrounded immediately, without first attempting the mount. A missing mount point is treated as a timeout, to allow for nested NFS mounts.
soft    If an NFS file operation has a major timeout then report an I/O error to the calling program. The default is to continue retrying NFS file operations indefinitely.
intr    If an NFS file operation has a major timeout and it is hard mounted, then allow signals to interrupt the file operation and cause it to return EINTR to the calling program. The default is to not allow file operations to be interrupted.
timeo=5,retrans=5,actimeo=10,retry=5
which should allow the NFS mount to time out and make the directory inaccessible if the NFS server drops the connection, rather than waiting in retries.
Take a look at this link for more information about NFS mount options
Reference: http://unix.stackexchange.com/questions/31979/stop-broken-nfs-mounts-from-locking-a-directory
Thứ Tư, 3 tháng 7, 2013
Symfony-Doctrine: free memory leak tips
- Disable debug mode. Add the following before the db connection is initialized:
sfConfig::set('sf_debug', false);
- Set the auto-free-query-objects attribute on the db connection:
$connection->setAttribute(Doctrine_Core::ATTR_AUTO_FREE_QUERY_OBJECTS, true );
- Free all objects after use:
$object = createBigObject();
$object->save();
$object->free(true);
unset($object);

$q = Doctrine_Query::create()->from('User u');
$results = $q->fetchArray();
$q->free();
- Unset all arrays after use:
unset($array_name);
- Configure the databases.yml file:
all:
  doctrine:
    class: sfDoctrineDatabase
    param:
      dsn: 'mysql:host=localhost;dbname=.......'
      username: .....
      password: .....
      profiler: false
Source: compiled from various sources
Advanced PHP Error Handling via PHP
If you are having trouble handling PHP errors using htaccess, these three items are the first things to check. If it turns out that you are unable to use htaccess to work with PHP errors, don’t despair: this article explains how to achieve the same goals using local php.ini files. To implement this strategy, the following is required:
- Ability to create/edit a php.ini file in your public_html directory
- A server running PHP via CGI (e.g., phpSuExec), not Apache 2
- Ability to edit/change permissions for files on your server
- Access/editing privileges for htaccess files (not required)
After explaining the implementation process for production environments, we will explore several useful functional customizations for both production and development servers. Excited? Great, let’s begin..
Step 1: Create a custom php.ini file in your site’s root directory
Using a text editor, create a file named “php.ini” and add the following PHP directives:
;;; php error handling for production servers
display_startup_errors = off
display_errors = off
html_errors = off
log_errors = on
docref_root = 0
docref_ext = 0
error_log = /var/log/php/errors/php_error.log
Here, we are disabling all public error displays and enabling private error logging in the specified file. After editing the path and file name of the error log in the last line, save the file and upload it to the root directory of your domain. Generally, this directory is named “public_html”, but it may be different on your server. Then, create the specified log file and ensure that it is writable (via 755 or 777) by the server.
Step 2: Enable subdirectory inheritance of custom settings
At this point, error logging should be working, but only for the same directory in which you have placed the php.ini file. Unfortunately, by default, locally specified php.ini directives only affect the directory in which they are located; they are not inherited by subdirectories as they are for htaccess directives. Thus, each directory for which you would like to log errors requires its own copy of the php.ini file. Fortunately, if you are able to access and edit your site’s root htaccess file, there is an easy way to enable subdirectory inheritance of your custom php.ini settings. Simply add the following code to your site’s root htaccess file:
# enable subdirectory inheritance of custom php settings
suPHP_ConfigPath /home/path/public_html
This trick takes advantage of htaccess’ inheritance properties by using them to “point” from each subdirectory to the custom php.ini file in the root (e.g., public_html) directory. Note that you may override the root php.ini directives by placing alternate php.ini files in the target subdirectory.
Step 3: Secure your custom php.ini and log files
Once everything is working, it is important to protect your domain by securing your newly created files. In addition to setting permissions to 600 for your custom php.ini file(s), you may also want to add the following directives to your root htaccess file:
# deny access to php.ini
<Files php.ini>
order allow,deny
deny from all
satisfy all
</Files>
# deny access to php error log
<Files php_error.log>
order allow,deny
deny from all
satisfy all
</Files>
And that’s it: PHP error logging should now be securely enabled on your domain. Now let’s explore some useful functional customizations for both production and development servers..
Controlling the level of PHP error reporting
Using your custom php.ini file, it is possible to set the level of error reporting to suit your particular needs. The general format for controlling the level of PHP errors is as follows:
;;; general directive for setting php error level
error_reporting = integer
There are several common values used for “integer”, including:
- Complete error reporting: for complete PHP error logging, use an error-reporting integer value of “8191”, which will enable logging of everything except run-time notices.
- Zend error reporting: to record both fatal and non-fatal compile-time warnings generated by the Zend scripting engine, use an error-reporting integer value of “128”.
- Basic error reporting: to record run-time notices, compile-time parse errors, as well as run-time errors and warnings, use “8” for the error-reporting integer value.
- Minimal error reporting: to record only fatal run-time errors, use an error-reporting integer value of “1”, which will enable logging of unrecoverable errors.
Setting the maximum file size for your error strings
Using your custom php.ini file, you may specify a maximum size for your PHP errors. This controls the size of each logged error, not the overall file size. Here is the general syntax:
;;; general directive for setting max error size
log_errors_max_len = integer
Here, “integer” represents the maximum size of each recorded error string as measured in bytes. The default value is “1024” (i.e., 1 kilobyte). To unleash your logging powers to their fullest extent, you may use a zero value, “0”, to indicate “no maximum” and thus remove all limits. Note that this value is also applied to displayed errors when they are enabled (e.g., during development).
Disable logging of repeated errors
If you remember the last time you examined a healthy (or sick, depending on your point of view) PHP error log, you may recall countless entries of nearly identical errors, where the only difference for each line is the timestamp of the event. If you would like to disable this redundancy, throw down the following code in your custom php.ini file:
;;; disable repeated error logging
ignore_repeated_errors = true
ignore_repeated_source = true
With these lines in place, repeated errors will not be logged, even if they are from different sources or locations. If you only want to disable repeat errors from the same source or file, simply comment out or delete the last line [note: comments must begin with a semicolon (;) in php.ini files]. Conversely, to ensure that your log file includes all repeat errors, change both of the true values to false.
Putting it all together — Production Environment
Having discussed a few of the useful ways to customize our PHP error-logging experience, let’s wrap it all up with a solid, php.ini-based error-handling strategy for generalized production environments. Here is the code for your custom php.ini file:
;;; php error handling for production servers
display_startup_errors = false
display_errors = false
html_errors = false
log_errors = true
ignore_repeated_errors = false
ignore_repeated_source = false
report_memleaks = true
track_errors = true
docref_root = 0
docref_ext = 0
error_log = /var/log/php/errors/php_error.log
error_reporting = 999999999
log_errors_max_len = 0
Or, if you prefer, an explanatory version of the same code, using comments to explain each line:
;;; php error handling for production servers
; disable display of startup errors
display_startup_errors = false
; disable display of all other errors
display_errors = false
; disable html markup of errors
html_errors = false
; enable logging of errors
log_errors = true
; disable ignoring of repeat errors
ignore_repeated_errors = false
; disable ignoring of unique source errors
ignore_repeated_source = false
; enable logging of php memory leaks
report_memleaks = true
; preserve most recent error via php_errormsg
track_errors = true
; disable formatting of error reference links
docref_root = 0
; disable formatting of error reference links
docref_ext = 0
; specify path to php error log
error_log = /var/log/php/errors/php_error.log
; specify recording of all php errors
error_reporting = 999999999
; disable max error string length
log_errors_max_len = 0
This PHP error-handling strategy is ideal for a generalized production environment. In a nutshell, this code secures your server by disabling public display of error messages, yet also enables complete error transparency for the administrator via private error log. Of course, you may wish to customize this code to suit your specific needs. As always, please share your thoughts, ideas, tips and tricks with our fellow readers. Now, let’s take a look at a generalized error-handling strategy for development environments.
Putting it all together — Development Environment
During project development, when public access to your project is unavailable, you may find it beneficial to catch PHP errors in real time, where moment-by-moment circumstances continue to evolve. Here is a generalized, php.ini-based PHP error-handling strategy for development environments. Place this code in your custom php.ini file:
;;; php error handling for development servers
display_startup_errors = true
display_errors = true
html_errors = true
log_errors = true
ignore_repeated_errors = false
ignore_repeated_source = false
report_memleaks = true
track_errors = true
docref_root = 0
docref_ext = 0
error_log = /var/log/php/errors/php_error.log
error_reporting = 999999999
log_errors_max_len = 0
For this code, we will forego the line-by-line explanations, as they may be extrapolated from the previous section. This PHP error-handling strategy is ideal for a generalized development environment. In a nutshell, this code enables real-time error handling via public display of error messages, while also enabling complete error transparency for the administrator via private error log. Of course, you may wish to customize this code to suit your specific needs. As always, please share your thoughts, ideas, tips and tricks with our fellow readers. Whew! That about does it for this article — see you next time!
Footnotes
- 1 To determine if your server is running PHP via phpSuExec (i.e., CGI) instead of Apache, upload a phpinfo() file and check the “Server API” near the top of the file. If it says “Apache”, PHP is running on Apache; if it says “CGI”, PHP is running via phpSuExec.
- 2 This is important because it is impossible to manipulate php.ini directives via htaccess while running PHP on phpSuExec.
- 3 For more information, check out the manual on Error Handling and Logging Functions at php.net
- 4 Many thanks to Jeff N. at A Small Orange for helping with the information provided in this article :)
- 5 Due to the bitwise nature of the various error-reporting values, the value for logging all errors continues to increase. For example, in PHP 5.2.x, its value is 6143, and before that, its value was 2047. Thus, to ensure comprehensive error logging well into the future, it is advisable to set a very large value for error_reporting, such as 2147483647.
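Footnote 5’s suggested value of 2147483647 can be sanity-checked with shell arithmetic: it is the largest positive 32-bit signed integer, i.e. a value with all 31 low bits set, so any error-constant bit PHP adds in the future is already covered:

```shell
# (1 << 31) - 1 sets bits 0..30, the maximum positive 32-bit value
echo $(( (1 << 31) - 1 ))   # prints 2147483647
```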
Sunday, 23 June 2013
Configuring my.cnf
#Ram 16G
#CPU 8core
[mysqld]
# GENERAL
datadir= /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
pid_file = /var/lib/mysql/mysql.pid
user = mysql
port = 3306
default_storage_engine = InnoDB
# INNODB
innodb_buffer_pool_size = 12G
innodb_log_file_size = 128M
innodb_log_files_in_group = 2
innodb_file_per_table = 1
innodb_flush_method = O_DIRECT
innodb_log_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M
# MyISAM
key_buffer_size = 512M
# LOGGING
log_error = /var/lib/mysql/mysql-error.log
log_slow_queries= /var/lib/mysql/mysql-slow.log
# OTHER
tmp_table_size = 32M
max_heap_table_size = 32M
query_cache_type = 1
query_cache_size = 512M
query_cache_limit = 8M
thread_concurrency = 16
max_connections = 1000
thread_cache_size = 256
table_open_cache = 10000
open_files_limit = 65535
#Safety and Sanity Settings
expire_logs_days = 7
max_allowed_packet = 64M
max_connect_errors = 1000
wait_timeout = 7200
connect_timeout = 20
skip_name_resolve
symbolic-links=0
[client]
socket= /var/lib/mysql/mysql.sock
port = 3306
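As a sanity check on a configuration like the one above, it helps to estimate the theoretical worst-case memory footprint: the global buffers plus the per-connection buffers multiplied by max_connections. This is only a rough upper bound (connections rarely allocate their full buffers at once), and the figures below simply restate values from this file:

```shell
# global buffers, in MB: innodb buffer pool + key buffer + query cache
global_mb=$(( 12 * 1024 + 512 + 512 ))
# per-connection buffers, in MB: sort_buffer_size + join_buffer_size
per_conn_mb=$(( 8 + 8 ))
# worst case with max_connections = 1000
echo $(( global_mb + per_conn_mb * 1000 ))   # prints 29312
```

The result, 29312MB (about 28.6GB), exceeds the 16GB of RAM noted at the top of the file; in practice most connections never fill their sort and join buffers, but it shows why per-connection buffers should be sized cautiously.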
Thursday, 2 May 2013
MySQL Galera Cluster: the complete collection
http://www.fromdual.com/building-galera-replication-from-scratch
http://www.sebastien-han.fr/blog/2012/04/01/mysql-multi-master-replication-with-galera/
http://www.codership.com/wiki/doku.php?id=info
http://codership.com/content/using-galera-cluster
http://www.percona.com/files/presentations/percona-live/nyc-2012/PLNY12-galera-cluster-best-practices.pdf
http://www.severalnines.com/blog/scaling-drupal-multiple-servers-galera-cluster-mysql
http://www.percona.com/live/mysql-conference-2012/sessions/how-evaluate-which-mysql-high-availability-solution-best-suits-you
http://www.mysqlperformanceblog.com/2013/03/03/investigating-replication-latency-in-percona-xtradb-cluster/
Benchmark
http://www.mysqlperformanceblog.com/2011/10/13/benchmarking-galera-replication-overhead/
http://www.codership.com/files/presentations/Galera_Tutorial_UC2012.pdf
Wednesday, 24 April 2013
Tuning the Apache Prefork MPM
Apache uses a set of values called the Prefork MPM to determine how many server processes it will run and how many requests each process will handle before being recycled. Out of the box, all Apache installations use the same values regardless of whether your server has 512MB of RAM or 8GB of RAM. It is important that, as the server administrator, you configure these values to match your server load.
The Apache Prefork MPM can be found in the Apache configuration file; usually /etc/httpd/conf/httpd.conf. The default values are...
<IfModule prefork.c>
StartServers 2
MinSpareServers 3
MaxSpareServers 3
ServerLimit 75
MaxClients 75
MaxRequestsPerChild 1000
</IfModule>
Each directive (taken from "http://httpd.apache.org/docs/trunk/mod/mpm_common.html") is detailed below.
- - - - - - - - - - - -
The StartServers directive sets the number of child server processes created on startup. As the number of processes is dynamically controlled depending on the load there is usually little reason to adjust this parameter.
- - - - - - - - - - - -
The MinSpareServers directive sets the desired minimum number of idle child server processes. An idle process is one which is not handling a request. If there are fewer than MinSpareServers idle, then the parent process creates new children until the MinSpareServers setting is satisfied.
- - - - - - - - - - - -
The MaxSpareServers directive sets the desired maximum number of idle child server processes. An idle process is one which is not handling a request. If there are more than MaxSpareServers idle, then the parent process will kill off the excess processes.
- - - - - - - - - - - -
The ServerLimit directive is only used if you need to set MaxClients higher than 256 (default). Do not set the value of this directive any higher than what you might want to set MaxClients to.
- - - - - - - - - - - -
The MaxClients directive sets the limit on the number of simultaneous requests that will be served. Any connection attempts over the MaxClients limit will normally be queued, up to a number based on the ListenBacklog directive. Once a child process is freed at the end of a different request, the connection will then be serviced.
For non-threaded servers (i.e., prefork), MaxClients translates into the maximum number of child processes that will be launched to serve requests. The default value is 256; to increase it, you must also raise ServerLimit.
- - - - - - - - - - - -
The MaxConnectionsPerChild directive (known as MaxRequestsPerChild prior to Apache 2.4) sets the limit on the number of connections that an individual child server process will handle. After MaxConnectionsPerChild connections, the child process will die. If MaxConnectionsPerChild is 0, then the process will never expire.
Setting MaxConnectionsPerChild to a non-zero value limits the amount of memory a process can consume through (accidental) memory leaks.
- - - - - - - - - - - -
The single most important directive is MaxClients, as this determines the number of Apache child processes that will be launched to serve requests. A simple calculation for MaxClients would be:
(Total Memory - Critical Services Memory) / Size Per Apache process
I define Critical Services as services such as MySQL, Plesk, or cPanel; any service that is required for proper operation of your server.
I've used the following commands via shell to determine values for Total Memory, OS Memory, MySQL Memory, and Apache Process Size
TOTAL MEMORY
[root@vps httpd]# free -m
total used free shared buffers cached
Mem: 1002 599 402 0 28 337
-/+ buffers/cache: 233 769
Swap: 2047 124 1922
MYSQL MEMORY
[root@vps httpd]# ps aux | grep 'mysql' | awk '{print $6}'
408
21440
704
APACHE PROCESS SIZE
[root@vps httpd]# ps aux | grep 'httpd' | awk '{print $6}'
22468
11552
41492
40868
41120
41696
39488
41704
15552
16076
16084
728
In this case the server has 1002MB of memory allocated, xx used by the OS itself, 21MB used by MySQL, and each Apache process averages about 30MB.
MaxClients = (1002 - 21) / 30, therefore MaxClients = 32.7
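The same arithmetic can be scripted; the figures below are the ones from this example (total RAM, critical-service memory, and average per-process size), and shell integer division simply floors the 32.7 to a usable whole number:

```shell
total_mb=1002        # total RAM from free -m
services_mb=21       # MySQL (critical services) memory
apache_mb=30         # average size of one Apache process
echo $(( (total_mb - services_mb) / apache_mb ))   # prints 32
```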
The next important aspect is MaxConnectionsPerChild, as this is the number of connections a child process will handle before it is recycled.
A good calculation for MaxConnectionsPerChild would be:
(total amount of daily requests / total number of daily processes)
Determining these values is a bit more complex as it requires some type of statistics package or thorough knowledge of interpreting Apache access logs.
As this affects only the CPU time needed to recycle the process, not memory usage, the standard value of 1000 should be used if you are unable to determine this information.
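With hypothetical traffic numbers (the figures below are made up purely for illustration, not taken from this server), the calculation looks like this:

```shell
daily_requests=120000    # hypothetical: from your stats package or access logs
daily_processes=150      # hypothetical: child processes spawned per day
echo $(( daily_requests / daily_processes ))   # prints 800
```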
Thus a good configuration for this server would be
<IfModule prefork.c>
StartServers 2
MinSpareServers 3
MaxSpareServers 3
ServerLimit 30
MaxClients 30
MaxRequestsPerChild 1000
</IfModule>
Once you've saved the file, be sure to perform a configuration test before restarting Apache.
[root@vps httpd]# service httpd configtest
Syntax OK
[root@vps httpd]# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
Source: http://www.hosting.com/support/linux/tuning-the-apache-prefork-mpm