
How to deal with many records

9/14/2007 1:19:31 PM
PHPRunner General questions
darkstorm author

Hello all,

I am facing the following situation.

I have inserted more than 90,000 records into MySQL. The problem is that the application runs really slowly, and I was wondering if there is something I can do to optimize it so that it does not put such a big load on the server.
I am considering upgrading the server to a more powerful one (hopefully a dual Xeon will do the trick), but I want to keep that as a last resort and first see if something can be done on the application side.
Any help or advice will be highly appreciated.
Thank you.

Alexey admin 9/14/2007

Hi,
90k records is not too big for MySQL.

Please clarify what kind of speed issues you are experiencing, when and where.

And I'll try to help you.

darkstorm author 9/14/2007

Hi,

90k records is not too big for MySQL.

Please clarify what kind of speed issues you are experiencing, when and where.

And I'll try to help you.


Hello Alexey,

The problem appears when I access the _list.php page. It doesn't really matter how many records I select to show (for example 20): the page loads very slowly because of the 90k records I have. When I click to edit a record it runs smoothly, and updating a record is fine. But once I click to view the _list.php page again, it loads up the server, consuming 99% of the CPU.
Other tables, like the one behind _users_list.php, contain only a few records and load just fine, so my guess is that the problem is the query where the script looks up the records, and that it takes long because of the amount of data.
Any suggestions?

rafamarquez 9/14/2007

My suggestion is (on the Edit SQL Query page) to mark the "No records on the first page" option. It will force the user to do a search and restrict the amount of data to retrieve.
I have 4.5 million rows and it works very well.
In MySQL you can also use the LIMIT clause and order by the latest records, etc. (see the sketch below).
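For example, a LIMIT query that fetches only the latest rows might look like this (table and column names are only placeholders):

SELECT *
FROM clients            -- placeholder table name
ORDER BY id DESC        -- assuming an auto-increment id column
LIMIT 20;               -- return only the 20 most recent records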
Hugo Rafael - PostgreSQL - Paraguay.

darkstorm author 9/14/2007

Hello Hugo, and thank you for your input.
I have applied the restriction, and although it seems a bit faster at first, it actually isn't.
When a user performs a search the result is still around 2-5k records...

I tried a more specific search myself so that fewer records would come back, around 450, and it still behaves the same.
CPU around 90%... and an average page load of 7-9 seconds.
And that was with me being the only one running the query at the moment, which will not be the case on a typical day.
Also, I should mention that I am using an Intel® Celeron® CPU @ 2.66GHz with 512MB of physical memory.
Thanks

rafamarquez 9/14/2007

In my case I have a dual Xeon 3.2, 2GB RAM, and 4 SATA disks (7200RPM, 16MB cache).

I have created 2 tablespaces (1 for data, 1 for indexes).

The OS is on the first disk.

The data tablespace is on the second disk and the index tablespace is on the 3rd disk.

On the 4th disk I have the PHP files and other program files.
The other thing I do is tune PostgreSQL so that the indexes and queries are cached in RAM.
I think you have to buy more RAM, install MySQL Administrator (www.mysql.com), and tune MySQL to use more RAM so that you get more speed.
Hugo Rafael - Paraguay - PostgreSQL

rafamarquez 9/14/2007

I think you have to tune MySQL to use more RAM with MySQL Administrator (download it from www.mysql.com). You have to consider the amount of RAM you actually have and the type of DB, MyISAM or InnoDB.
If you run Windows Server or XP and you will have a lot of users, you have to consider buying more RAM or better disks.
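As a rough illustration only, the buffer settings usually raised for this are the ones below; the values are made-up examples and must be sized to the RAM actually available:

# my.cnf sketch (example values only)
[mysqld]
key_buffer_size         = 128M   # MyISAM index cache
innodb_buffer_pool_size = 256M   # InnoDB data and index cache
query_cache_size        = 32M    # cache for repeated identical SELECTs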

rafamarquez 9/14/2007

Reading your post one more time, I think: do you have indexes on your tables? It seems to be a FULL TABLE SCAN, because you did a search and there was no speed-up. You have to build indexes for every column on which you will search.
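A minimal sketch of what that looks like, with placeholder table and column names; the EXPLAIN check afterwards shows whether the indexes are actually used:

ALTER TABLE clients
  ADD INDEX idx_city   (city),      -- one index per searched column
  ADD INDEX idx_status (status);

EXPLAIN SELECT * FROM clients WHERE city = 'brasov';
-- type: ALL with a large rows value means a full table scan;
-- type: ref with a small rows value means the index is being used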

Alexey admin 9/14/2007

Balanel,
do you have ORDER BY defined on the Edit SQL query tab for your table?

In this case I recommend building a table index on the ORDER BY fields.
If you use a complex query, indexes will help too.
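For instance, when a list query filters on one column and orders by another, a combined index such as the following (names are hypothetical) lets MySQL return the rows already sorted and skip a separate filesort pass:

CREATE INDEX status_name ON clients (status, name);
-- covers WHERE status = '...' ORDER BY name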
You can publish your project on the Demo account and send a link to it to support@xlinesoft.com, and I'll help you build the indexes.

darkstorm author 9/14/2007

Thank you all for replying.
Hello,
I have built indexes in MySQL for the columns the user will search on. I don't know whether I need to change anything in the script for that. I have also selected the ORDER BY option.
No significant improvement that I have noticed :| ... I am trying to put together a script to see how long the database query really takes...

I will come back with more updates tomorrow.
Again thank you all for replying and trying to help out.
Kindest Regards

alang 9/16/2007

I have a similar situation where one of the tables is around 150,000 records. I found the following article quite useful in regard to indexes etc. ( http://www.databasejournal.com/features/my...cle.php/1382791 )
Some suggestions in summary:

  • create indexes on any field used in your query (except those fields where many of the values are the same - the improvement in speed is negligible; see the selectivity check sketched after this list)
  • keep the complexity of the main SQL query to a minimum - only use inner joins where you have to be able to search on a field in a linked table
  • don't specify order by - let the user do that if they want to - hopefully after refining the search.
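A quick way to check whether a field is selective enough to be worth indexing, sketched with placeholder names:

SELECT COUNT(DISTINCT city) / COUNT(*) AS selectivity
FROM clients;
-- close to 1: values are mostly unique, a good index candidate
-- close to 0: most rows share the same value, the index gains little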

Alexey admin 9/17/2007

Balanel,
You can publish your project on the Demo account and send a link to it to support@xlinesoft.com, and I'll help you optimize it.
The "Publish on Demo account" button is on the last tab of PHPRunner.

darkstorm author 10/2/2007

Hello again,

I come to you with a little update.

I have upgraded the server to a P4 @ 3.4GHz with 1GB of DDR2 RAM and a HDD with a 16MB buffer.
Although the query times have improved a bit, I feel there is more I can do to optimize the script.
I believe the issue is that on every search I make, the output is quite large, usually 3-4k records; see below:
mysql> EXPLAIN select * from _clienti where oras = 'brasov' and situatii = 'nu a fost contactat';
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
| id | select_type | table    | type | possible_keys               | key           | key_len | ref         | rows | Extra       |
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
|  1 | SIMPLE      | _clienti | ref  | situatii_oras,situatii,oras | situatii_oras |     106 | const,const | 4240 | Using where |
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
1 row in set (0.00 sec)
As you can see, I have created indexes for the columns and the script does not scan the entire table. However, it still takes around 5-6 seconds to produce the output, and that's with only one user stressing the CPU.
Any further advice is appreciated.

Regards
PS:

The other thing I do is tune PostgreSQL so that the indexes and queries are cached in RAM.

How can I accomplish this?

acpan 10/5/2007

mysql> EXPLAIN select * from _clienti where oras = 'brasov' and situatii = 'nu a fost contactat';
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
| id | select_type | table    | type | possible_keys               | key           | key_len | ref         | rows | Extra       |
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
|  1 | SIMPLE      | _clienti | ref  | situatii_oras,situatii,oras | situatii_oras |     106 | const,const | 4240 | Using where |
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
1 row in set (0.00 sec)
Hi, I learnt from this interesting post about optimising MySQL with indexes. I applied it to my database, which has around 1 million records.

I used EXPLAIN SELECT before optimising (creating indexes on the WHERE condition fields); here is the observation:
Before creating the index, the rows value in the EXPLAIN output was 62010.

After: 4 rows.
It's a tremendous improvement in speed, and I suppose some of my server load comes from un-optimised SELECTs.

I have many SQL SELECTs with complex WHERE conditions that run frequently, some only a few seconds apart.

I needed to track all the un-optimised SQL SELECTs, so I also turned on the MySQL flag (through the MySQL GUI) to log slow SQL, and restarted the MySQL server.
After a while, I went through the Slow Query Log section in the MySQL GUI, which captures slow SQL, and went on to create indexes for the queries it captured. Again the row counts dropped and speed improved significantly.
Your output seems to say that rows is 4240 and the key length is quite long at 106, which, according to the document I read, should be shorter. Making the key shorter may help. According to the document, you can shorten the key length by specifying the number of characters to match, as you do not need to match the whole string of the field, for example all 106 characters, which may simply be the defined length of your field.
Example:
Your statement: select * from _clienti where oras = 'brasov' and situatii = 'nu a fost contactat';

The oras field or situatii field length may be defined by you as 106 characters.

But for indexing, you may not need the whole field in order to sort; maybe 30 characters will do.
For example, if you have contents like this for oras:
abc
abcde
abcdef
abcdefghijk
then setting the key length to 3 will return 4 rows, setting it to 4 returns 3 rows, and setting it to 6 returns 2 rows.
So in order to make it efficient, do not set it to 3, as MySQL will have to go through all 4 rows one by one and then apply the condition to find the final row.
It seems to me that setting it to 6 would be good for this example: it will look through only 2 rows, then apply the WHERE condition on those 2 rows and find the final result.
Of course you can set it as long as you want for your case, but try to limit it; for example, if you estimate that most of the records will be unique within the first 20 characters, then set the key length to 20 when defining the indexes.
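A minimal sketch of rebuilding the combined index with column prefixes; the 20-character lengths are only an example, pick lengths that keep most values distinct:

ALTER TABLE _clienti
  DROP INDEX situatii_oras,
  ADD INDEX situatii_oras (situatii(20), oras(20));
-- only the first 20 characters of each column go into the index,
-- which shortens key_len and the amount of index data scanned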
My experience is also that when CPU or memory is close to its limit, or busy most of the time, it has a very bad impact on SQL. You need to check whether CPU and memory utilisation are consistently very high and which process causes it. If the suspect processes are legitimate ones, then the likelihood is that you need to upgrade the CPU and memory. On the same system, after I upgraded to a dual CPU and 2 GB of memory a few months back, the improvement was also very obvious.
I am no expert, just sharing what I applied by learning from this post.
Good Luck.

Dex

acpan 10/5/2007

Just to add:
Here's how I turn on the slow query log in MySQL Administrator:
Startup variables:
Slow Queries Log = On
Long Query Time = 2
Log queries that don't use index = On
Expire log days = On and set to 2.
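Roughly the same settings expressed in my.cnf, for anyone not using the GUI; the option names below are from the MySQL 5.0 era and differ in later versions:

# my.cnf sketch
[mysqld]
log-slow-queries              = mysql-slow.log
long_query_time               = 2
log-queries-not-using-indexes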
Restart the MySQL service.
After some time, or after going to your web site and doing some refreshes and queries, go to the GUI:
Server Logs -> Slow Queries

It will show you the slow SELECT statements and how many rows they return.
Then create indexes on the fields in the SELECT statement's WHERE condition and ORDER BY.
Good luck.

Dex

darkstorm author 10/6/2007

I have managed to tweak the indexes a little bit and now there is some improvement:
mysql> EXPLAIN select * from _clienti where oras = 'brasov' and situatii = 'nu a fost contactat';
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
| id | select_type | table    | type | possible_keys               | key           | key_len | ref         | rows | Extra       |
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
|  1 | SIMPLE      | _clienti | ref  | oras,situatii,situatii_oras | situatii_oras |      18 | const,const | 1220 | Using where |
+----+-------------+----------+------+-----------------------------+---------------+---------+-------------+------+-------------+
1 row in set (0.00 sec)

acpan 10/6/2007

Glad to hear that, and I notice your key length is now shortened from 106 to 18.
Thanks to AlanG for sharing the link, which is very useful.
Rgds

Dex