Re: [WebDNA] Not even sure what to ask for help on. . . :(

This WebDNA talk-list message is from 2008 (message no. 101642). It keeps the original formatting.
Just a question: do you have real bandwidth with a full 6Mbps, or do you just have access to a shared network on which you can pump up to 6Mbps? Also, did you consider the upload stream? Downloading at 5.5Mbps easily means 500Kbps upstream...

chris

On Dec 16, 2008, at 4:46, David Bastedo wrote:

> Absolutely. So let me explain a bit more...
>
> 1. Under low load, everything has been working fine.
> 2. I have two pages where I expected a slower load time as a result of
> the searches. What I am experiencing, though, is 30-plus seconds to
> load the frame page with one search, then 30 seconds each to load two
> more iframe pages. There are numerous searches and lookups, but when
> you go to a load time like that from a normal load time... I was
> developing on the same server, so it was under normal load.
> 3. My databases are in RAM and they have a few records, but nothing I
> would call exceptional - nothing with 100,000s of records.
> 4. The last site ran most of the same code, and in peak periods I have
> had to tweak the server to max out the load and connections - but
> those were pretty extreme conditions, and I basically offloaded images
> and a key page.
> 5. On my last version, I offloaded all the graphics to another server.
> On this version, I have offloaded most - menu and footer - to another
> server, and have integrated the Flickr API into pages, so those images
> are coming off Flickr. But I built a CMS, so those images - thumbs
> etc. - are on the main server, as is the store.
> 7. For example, loading my store right now just took several seconds,
> and on Thursday it was instantaneous. Currently, a category search
> with two products is taking several seconds to load, and I am sure I
> would have noticed that :)
> 8. I am looking at my total bandwidth for the day, and it peaked at
> 5.5 briefly. It's running just over 3Mbps right now.
>
> To update on Ken's note:
>
> - The entire site is in iframes. All big chunks of code are split up
>   onto individual pages.
> - I commit to db only when necessary.
> - My pages all have the .html extension.
> - My Flickr API was getting 1.1 qps.
> - The "key" pages in question cannot be static; the rest I was
>   planning on making static.
> - Apache is running 2.3 qps right now, but was higher earlier.
>
> Now, when you say a permanent db, are you suggesting that I compile
> all of my results from the searches and lookups into one database
> beforehand? I did that on the last version for a bunch of things and
> found it to be such a pain in the ass. I wrote a routine to write a
> new db every time I updated... I am not sure I CAN do that for parts
> of the site, as one is updated constantly.
>
> A search for two items in the store is taking seconds - and it is in
> iframes as well.
>
> I am currently going through and replacing lookups with searches where
> possible; if there are more than 2-3 I think I can combine, I will.
>
> I cache templates, but have changed the value as per your suggestion.
>
> As for the safe write - how would this affect the stability of my
> databases? I have had no problems with them in 3+ years of this site
> and don't want to start.
>
> I want to think it is bandwidth, as I think that would be an easier
> solution for me. But if I haven't hit that magic number of 6Mbps, even
> though I have been close, can it be the bandwidth?
>
> My other "real" issue is that I haven't announced the site yet. When I
> do, I will be telling a lot of people... so my problems get worse from
> here. A "slight" improvement under the current load won't really
> satisfy my expectations for the peak. As well, this is the beginning
> of a year-plus endeavour in which, if all goes right, I'll be
> increasing traffic, not decreasing. I am quite confident that I can
> get a tap big enough to satisfy, and that my ISP has that capability.
>
> What I can't figure out is how to get this site moving fast enough to
> keep up. I think I might have to break down the pieces I have grouped
> together on pages into individual pages.
>
> Since I started writing this an hour ago and checking various things,
> the traffic has been cut in half, and now looks like this:
>
> 1.1 requests/sec - 14.2 kB/second - 12.9 kB/request
> 149 requests currently being processed, 10 idle workers
> WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW
> WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW
> WWWWWWWWWWWWWWWWWW_W___WW______.................................
> ................................................................
> ................................................................
> ................................................................
> ................................................................
> ..
>
> And in my experience, on my old site, when it looked like this I was
> still OK. The only thing I did differently was to use tables to
> compile all the results for my main search instead of writing to a db,
> which I used to do exclusively.
>
> D.
>
> On Mon, Dec 15, 2008 at 11:16 PM, Rob wrote:
>> Ok... sorry, but I gotta be brutally honest here... it's just me, and
>> nothing personal. I just want to try to understand the problem...
>> From your previous email you said:
>>
>> "Ok, so I just did a soft launch of a site on Friday and my site
>> traffic jumped over 200%. Normally, that would be great, except the
>> site has now slowed to a crawl."
>>
>> Which to me means that under normal conditions, low load, your
>> lookups are working fine and everything functions normally. If it
>> was a problem in the coding, you would also see it in low-load
>> conditions as well...
>> yes/no? Lookups/searches using WebDNA are normally extremely fast -
>> way faster than the available bandwidth, as they are usually done in
>> RAM, unless you're pulling them via SQL or using an old 386/System 6
>> processor. Pulling from RAM means it doesn't have to read from disk,
>> and I wouldn't use SQL unless I had several thousand records anyway.
>>
>> IMHO, I still think it's bandwidth. On a 6Mbps line you might max
>> out, and start grinding to a halt, at about 23 connections (avg
>> 256Kbps per connection) all streaming at once.
>>
>> I currently have to split out loads for the same reason. I park all
>> intensive loads on a high-bandwidth network, and use our servers, on
>> a completely separate network, to just serve out the code. It also
>> has the advantage of differentiating between a coding and a
>> bandwidth problem. I actually purchase space on the backbone for the
>> same reason, for about $3.00-$4.00 per month/site from directnic.
>>
>> Just my 2 cents...
>>
>> Rob
>>
>> On 15-Dec-08, at 7:00 PM, David Bastedo wrote:
>>
>>> I stream through the same pipe and can handle up to 6Mb a second -
>>> which I have come close to, but not quite attained. The max I have
>>> hit in the last week is 5.5. The a/v is streaming fine - though
>>> that server is also maxed out and is being updated and
>>> reconfigured. That has been rectified temporarily - there is a
>>> memory leak in Flash Com Server, though my connection can handle
>>> several hundred streaming connections.
>>>
>>> I did do a test and am spending my night doing more. One culprit is
>>> the lookups in that search. Doing a nested search is way faster. I
>>> hope to go through all the major chunks and see what I can
>>> streamline.
>>>
>>> I'll post some results of side-by-side tests in a few hours, if
>>> that helps - which is pretty well what I need to do.
>>>
>>> D.
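[Editor's note: Rob's 23-connection figure above is just the line rate divided by the per-stream rate. A quick sanity check, sketched with WebDNA's own [math] context - this is illustrative, not code from the thread:]

```
[!] 6 Mbps line ~= 6000 Kbps; at roughly 256 Kbps per stream: [/!]
[math]6000/256[/math] [!] evaluates to about 23 concurrent streams
before the pipe saturates [/!]
```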
>>> On Mon, Dec 15, 2008 at 8:25 PM, Rob wrote:
>>>>
>>>> Sounds more like a bandwidth problem than a WebDNA problem...
>>>> What kind of line is this on?
>>>>
>>>> Rob
>>>>
>>>> On 15-Dec-08, at 2:59 PM, David Bastedo wrote:
>>>>
>>>>> Ok, so I just did a soft launch of a site on Friday and my site
>>>>> traffic jumped over 200%. Normally, that would be great, except
>>>>> the site has now slowed to a crawl.
>>>>>
>>>>> I have many images on a separate server. I have just added 6GB
>>>>> to the server - emergency-like, hoping it would help; it has,
>>>>> marginally - and now I am in the process of adding a third
>>>>> server - I also have one for streaming - and am planning on
>>>>> moving everything to MySQL, I think, though it would not be my
>>>>> preference.
>>>>>
>>>>> Anyway, before I can even contemplate that - doing that will
>>>>> take a fair bit of time - I need to get the current site as fast
>>>>> as possible, to buy me some time to do this new update.
>>>>>
>>>>> I guess my biggest question is on tables. I am using tables on
>>>>> this site and I think that this may be the biggest issue. I need
>>>>> to do a lot of sorting, and it "seemed" like the best, most
>>>>> convenient way to do it, though now I am wondering if this has
>>>>> caused way more problems than it has solved.
>>>>>
>>>>> Is it better to write to a temp db and then sort those results,
>>>>> if I have to, rather than a table?
>>>>>
>>>>> Here is a sample piece of code.
>>>>> (I am making custom music playlists, BTW)
>>>>>
>>>>> [table name=MyPlayListData&fields=PlayListItemID,PlayListID,Sequence,ConcertID,FLV_FileName,UserID,DateCreated,LastUpdate,FLV_Length,PlayListName,PlayListDescription,AlbumID,HSPSongID,PlayListID,PlayListDescription,UserID,PlayListType,DateCreated,timeTotal,MySongName,AlbumName,releaseDate,rating,MyRating][/table]
>>>>>
>>>>> [Search db=[pagePath]databases/aaa.db&gePlayListIDdata=0&eqAlbumIDdata=303&albumIDtype=num&[SO]sort=1&[SO]dir=[SB]&[SO]Type=num]
>>>>> [founditems]
>>>>> [replace table=MyPlayListData&eqPlayListIDdatarq=[PlayListID]&PlayListIDtype=num&eqUserIDdatarq=[UserID]&UserIDtype=num&eqHSPSongIDdatarq=[HSPSongID]&append=T][!]
>>>>> [/!]PlayListItemID=[PlayListItemID][!]
>>>>> [/!]&PlayListID=[PlayListID][!]
>>>>> [/!]&Sequence=[Sequence][!]
>>>>> [/!]&ConcertID=[ConcertID][!]
>>>>> [/!]&FLV_FileName=[FLV_FileName][!]
>>>>> [/!]&UserID=[UserID][!]
>>>>> [/!]&DateCreated=[DateCreated][!]
>>>>> [/!]&LastUpdate=[LastUpdate][!]
>>>>> [/!]&FLV_Length=[FLV_Length][!]
>>>>> [/!]&PlayListName=[PlayListName][!]
>>>>> [/!]&PlayListDescription=[PlayListDescription][!]
>>>>> [/!]&AlbumID=[LOOKUP db=[pagePath]databases/yyy.db&value=[PlayListID]&lookInField=PlayListID&returnField=AlbumID][!]
>>>>> [/!]&HSPSongID=[HSPSongID][!]
>>>>> [/!]&PlayListName=[LOOKUP db=[pagePath]databases/yyy.db&value=[PlayListID]&lookInField=PlayListID&returnField=PlayListName][!]
>>>>> [/!]&PlayListDescription=[LOOKUP db=[pagePath]databases/yyy.db&value=[PlayListID]&lookInField=PlayListID&returnField=PlayListDescription][!]
>>>>> [/!]&UserID=[LOOKUP db=[pagePath]databases/yyy.db&value=[PlayListID]&lookInField=PlayListID&returnField=UserID][!]
>>>>> [/!]&PlayListType=[LOOKUP db=[pagePath]databases/yyy.db&value=[PlayListID]&lookInField=PlayListID&returnField=PlayListType][!]
>>>>> [/!]&DateCreated=[LOOKUP db=[pagePath]databases/yyy.db&value=[PlayListID]&lookInField=PlayListID&returnField=DateCreated][!]
>>>>> [/!]&rating=[LOOKUP db=[pagePath]databases/yyy.db&value=[PlayListID]&lookInField=PlayListID&returnField=rating]&MyRating=[search db=[pagePath]databases/xxx.db&eqStoryIDdatarq=[PlayListID]&eqUserIDdatarq=[GETCOOKIE name=xxx]][founditems][TheRating][/founditems][/search][/replace]
>>>>> [/founditems]
>>>>> [/search]
>>>>>
>>>>> -> then I have to do two more searches: one for the results and
>>>>> one for next/prev.
>>>>>
>>>>> [search table=MyPlayListData&gePlayListIDData=0&eqalbumIDdatarq=303&PlayListIDsumm=T&[SB]sort=1&[SB]sdir=[SO]&[SB]type=[SB_type]&startAt=[startat]&max=10]
>>>>>
>>>>> I know I can make this code more streamlined, but I am not sure
>>>>> if it is the tables that are the problem.
>>>>>
>>>>> Without a load, these pages work great, but with the increased
>>>>> traffic it now takes - well, WAY too long to load a page.
>>>>> Anyway, I am going through and making my code thinner, as it
>>>>> were - I can get rid of a bunch of the lookups above and replace
>>>>> them with another search - but I am wondering if I should
>>>>> replace all the tables in the site with a temp .db.
>>>>>
>>>>> Any thoughts or advice? Thanks in advance.
>>>>>
>>>>> D.
>>>>> --
>>>>> David Bastedo
>>>>> Ten Plus One Communications Inc.
>>>>> http://www.10plus1.com
>>>>> 416.603.2223 ext.1
>>>>> ---------------------------------------------------------
>>>>> This message is sent to you because you are subscribed to
>>>>> the mailing list.
>>>>> To unsubscribe, E-mail to:
>>>>> archives: http://mail.webdna.us/list/talk@webdna.us
>>>>> old archives: http://dev.webdna.us/TalkListArchive/

Associated Messages, from the most recent to the oldest:

    
  1. Re: [WebDNA] Not even sure what to ask for help on. . . :( ("David Bastedo" 2008)
  2. Re: [WebDNA] Not even sure what to ask for help on. . . :( ("David Bastedo" 2008)
  3. Re: [WebDNA] Not even sure what to ask for help on. . . :( (christophe.billiottet@webdna.us 2008)
  4. Re: [WebDNA] Not even sure what to ask for help on. . . :( (Kenneth Grome 2008)
  5. Re: [WebDNA] Not even sure what to ask for help on. . . :( ("David Bastedo" 2008)
  6. Re: [WebDNA] Not even sure what to ask for help on. . . :( (Frank Nordberg 2008)
  7. Re: [WebDNA] Not even sure what to ask for help on. . . :( ("David Bastedo" 2008)
  8. Re: [WebDNA] Not even sure what to ask for help on. . . :( (Kenneth Grome 2008)
  9. Re: [WebDNA] Not even sure what to ask for help on. . . :( (Rob 2008)
  10. Re: [WebDNA] Not even sure what to ask for help on. . . :( ("David Bastedo" 2008)
  11. Re: [WebDNA] Not even sure what to ask for help on. . . :( (Rob 2008)
  12. [WebDNA] Not even sure what to ask for help on. . . :( ("David Bastedo" 2008)
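[Editor's note: the optimization David mentions - replacing the per-record [LOOKUP]s with a nested search - can be sketched roughly as below. Each [LOOKUP] is a separate scan of yyy.db, so seven lookups per found record cost far more than one nested search per record that returns all seven fields at once. This is a hedged sketch reusing the anonymized db name (yyy.db) from the message, not code from the thread:]

```
[!] One nested search per playlist instead of seven [LOOKUP]s:
    every needed field comes back from the single found record. [/!]
[search db=[pagePath]databases/yyy.db&eqPlayListIDdatarq=[PlayListID]&PlayListIDtype=num&max=1]
[founditems][!]
[/!]&AlbumID=[AlbumID]&PlayListName=[PlayListName][!]
[/!]&PlayListDescription=[PlayListDescription]&UserID=[UserID][!]
[/!]&PlayListType=[PlayListType]&DateCreated=[DateCreated]&rating=[rating][!]
[/!][/founditems]
[/search]
```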
christophe.billiottet@webdna.us
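[Editor's note: on the temp-db-versus-[table] question, the alternative David describes would look roughly like this: compile the joined results into a scratch .db once, when a playlist changes, and let the result and next/prev pages search and sort that db instead of rebuilding an in-memory table on every hit. A sketch under assumed names - temp_playlists.db is hypothetical:]

```
[!] Build once per update, not per page view: [/!]
[append db=[pagePath]databases/temp_playlists.db]PlayListID=[PlayListID]&AlbumID=[AlbumID]&PlayListName=[PlayListName]&rating=[rating][/append]

[!] Pages then search and sort the prebuilt db directly: [/!]
[search db=[pagePath]databases/temp_playlists.db&eqAlbumIDdatarq=303&AlbumIDtype=num&PlayListIDsort=1&startAt=[startat]&max=10]
[founditems][PlayListName]<br>[/founditems]
[/search]
```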


Related Readings:

  1. XML Syntax, Cookies and Variables.... (2004)
  2. Stopping bad HTML propagation ? (1997)
  3. [showif] problem (1999)
  4. When will this BUG be fixed -- or at least LOOKED AT ... ? (2002)
  5. Rollovers (1999)
  6. New Image Gallery using WebDNA and ImageMagick (2003)
  7. Uploading very large image files (2003)
  8. date range (1998)
  9. database problems (1999)
  10. Was 5.0 Pricing, now Sandbox versus Website and ruminating (2003)
  11. [OT] Robust order processing (2003)
  12. WebCat2b15MacPlugIn - [authenticate] not [protect] (1997)
  13. Just Testing (1997)
  14. WebCat2.0 acgi vs plugin (1997)
  15. Searchable WebCat (etc.) Docs ? (1997)
  16. [SHOWIF] (1997)
  17. RE: type 2 errors with ssl server (1997)
  18. typhoon... (1997)
  19. Projects & Contractors (1997)
  20. SETCOOKIE Problems (2003)