Re: [WebDNA] Accelerating Search / Index Performance in a Forum Service
This WebDNA talk-list message is from 2013
It keeps the original formatting.
numero = 110766
interpreted = N
texte =
Current server is: Unix - Macintosh OS X, 64-bit, FastCGI version 7.1.731
"Only commit databases to disk when instructed" is selected.
I find it hard to split the DB up into many category DBs. I think I need some better guidance here, for instance on how to search all of them at the same time and sort on e.g. date.
/Palle
Hi Palle! You do not specify what version of WebDNA you are using. You might remember that with the server version (6.2), even when the configuration preferences show "Commit Databases: Only commit databases to disk when instructed", WebDNA will write your database to disk anyway: 264 MB takes time to write.
In this case, there is the option of splitting your database, let's say by categories. If you have about 20 categories, you could split your database into 20 parts. If you want to enter a new item in a specific category, a short piece of code will point WebDNA at the proper database, which will update about 20 times faster. I used this method to build a 14-million-record database and got fast results, using the first letter of each customer name and 26 databases… This method is also used by Google in their search algorithm: your request first hits a server that "knows" which other server to ask for an answer.
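The routing code Chris describes can be very small. Below is a hypothetical WebDNA sketch, mirroring the [search] syntax quoted later in the thread; the per-category file naming (posts_[categoryID].db) and the field names are invented for illustration:

```
[!] Hypothetical sketch: pick the per-category file, then search only that one [/!]
[text]catdb=/forum/db/posts_[categoryID].db[/text]

[search db=[catdb]&eqpublishdata=1&AllReqd=T]
  [founditems]
    [!] ... render one post from this category's much smaller database ... [/!]
  [/founditems]
[/search]
```

Writing works the same way: compute [catdb] from the new item's category and [append] to that file only, so each update touches roughly 1/20th of the data.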
Another point is WebDNA FastCGI: WebDNA FastCGI writes to RAM if you check "Only commit databases to disk when instructed", which is much faster.
- chris
On Oct 2, 2013, at 10:41 AM, Palle Bo Nielsen <powerpalle@powerpalle.dk> wrote:
Hi all,
Looking for advice on optimising a webpage using WebDNA and large DBs.
I have built a forum entirely in WebDNA and the DBs are getting big. The main DB has the following characteristics…
- Columns: 11
- Rows: 847,121
- Size: 264 MB
If I display the different generic topics within the Forum and the most recent post within each category etc., then the code executes quite fast (acceptable performance).
But if I want to show the number of posts within each category or within each thread, then the performance slows down too much.
I have a category DB which eventually shows 25 topics on the main page of the forum; I then search the main DB for all posts with this category's ID and show the numFound. This is repeated for every unique topic.
<< Searching for the Topics >>
[search db=/forum/db/db1.db&eqdb1_publishdata=1&asdb1_prioritysort=1&db1_prioritytype=num&AllReqd=T][founditems] ... [/founditems][/search]
<< Searching for the numFound value - this one costs a lot of performance even though it's simple, because there is a lot of data to search through >>
[search db=/forum/db/db4.db&eqdb4_publishdata=1&eqdb4_db1data=[db1_sku]&AllReqd=T][numFound][founditems][/founditems][/search]
I would appreciate any good advice or ideas on how to accelerate the performance. Some kind of indexing to avoid live searching would be great, but how?
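One common way to get this kind of "indexing" is a denormalized counter: keep a postcount column in the topics DB (db1.db) and update it whenever a post is added, so the page reads a stored number instead of re-counting hundreds of thousands of rows per topic. A hypothetical sketch using WebDNA's [replace] and [math]; the db1_postcount and db1_topic fields are invented for illustration, and the other names mirror the snippets above:

```
[!] Hypothetical: after appending a new post to db4.db,
    bump the cached count on its topic record in db1.db [/!]
[replace db=/forum/db/db1.db&eqdb1_skudata=[db1_sku]]db1_postcount=[math][db1_postcount]+1[/math][/replace]

[!] Display: each topic now carries its own count - no search of db4.db needed [/!]
[search db=/forum/db/db1.db&eqdb1_publishdata=1&asdb1_prioritysort=1&db1_prioritytype=num&AllReqd=T]
  [founditems][db1_topic] ([db1_postcount] posts)[/founditems]
[/search]
```

The counts can be seeded once with the existing per-topic numFound search, after which only the small [replace] runs on each new post.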
/Palle
---------------------------------------------------------
This message is sent to you because you are subscribed to
the mailing list <talk@webdna.us>.
To unsubscribe, E-mail to: <talk-leave@webdna.us>
archives: http://mail.webdna.us/list/talk@webdna.us
Bug Reporting: support@webdna.us
Related Readings:
E-mailer error codes (1997)
BinaryBody for ReturnRaw (2003)
[TaxableTotal] - not working with AOL and IE (1997)
Help! WebCat2 bug (1997)
Help! WebCat2 bug (1997)
Authenticate (1997)
Secure server question (1997)
WCS Newbie question (1997)
[format xs] freeze (1997)
Error:Too many nested [xxx] contexts (1997)
attn: smitmicro - cart limitation (2002)
New command suggestion (1997)
searchable list archive (1997)
Webcat/javascript interactive pulldowns Q (2002)
Problems with webcat 2.01 for NT (1997)
Dates (1996)
normal users.db calls ... (1998)
SSL do I need it?? (1998)
shipping costs (1997)
Help me... (1998)