Re: [WebDNA] A new populated field to a DB with 700.000 records
This WebDNA talk-list message is from 2009
It keeps the original formatting.
numero = 104094
interpreted = N
texte = here it is:

----start code snippet----

[!]
--------------------------------------------------------------
** Hot Merge Example by Donovan **

** Note: This snippet is within a [listfiles..] context that looks in a
"hotfolder/" directory and is fired every 10 min. by a trigger. This snippet
also runs after integrity checks on the uploaded merge file.
** Note: The [tNumRecs_Merge] value is from a [numfound] on the merge file
after the integrity checks.
--------------------------------------------------------------
[/!]

[!] ** Merge 500 records at a time ** [/!]
[text]tNumLoops=[math]floor([tNumRecs_Merge]/500)[/math][/text]

[!] ** Set some vars ** [/!]
[text multi=T]hmt_index=1[!]
[/!]&tKeyField=MERGE_KEY_FIELD[!]
[/!]&tDestDB_Key=DEST_KEY_FIELD[!]
[/!]&tDestDBPath=your/db/destination.db[!]
[/!]&tNumSecs=10[!]
   ^ Edit the above more or less depending on size of merge.
[/!][/text]

[!] ** Need to set some text vars because some WebDNA is not allowed in [spawn] ** [/!]
[text]tFilename=[filename][/text]

[!] ** Spawn a new process for each block of 500 ** [/!]
[loop start=1&end=[math][tNumLoops]+1[/math]&advance=1]
  [spawn]
    [search db=hotfolder/[tFilename][!]
    [/!]&ne[tKeyField]datarq=find_all[!]
    [/!]&startat=[hmt_index][!]
    [/!]&max=500]
      [founditems]
        [replace db=[tDestDBPath][!]
        [/!]&eq[tDestDB_Key]datarq=[interpret][[tKeyField]][/interpret]][!]
        [/!]&DestFirstField=[url][Merge1Field][/url][!]
        [/!]&DestSecondField=[url][Merge2Field][/url][!] etc..
        [/!][/replace]
      [/founditems]
    [/search]
  [/spawn]

  [!] ** Wait <[tNumSecs]> seconds to start the next block ** [/!]
  [waitforfile file=non_existent_file.txt[!]
  [/!]&timeout=[tNumSecs]][/waitforfile]

  [!] ** Set index to next 500 block ** [/!]
  [text]hmt_index=[math][index]*500[/math][/text]
[/loop]

----end code----

The basic idea was that I didn't really care how long the merge took, but
rather, I wanted to make sure the processor wasn't overloaded. My idea was to
use SPAWN and a waiting technique using WAITFORFILE to "spread out" the task.
This turned out to work really well, I think. I used 'top -u apache' to
monitor the process on a merge with 10,000 records, and I didn't see *any*
noticeable heightened processor usage using this code.

Just thought I'd pass this experiment along to the list!

Donovan

disclaimer: :) the above code was snipped out of a live working system, but to
make it legible and universal, I rewrote a bit of it above, so there could be
some syntax errors from the rewrite.

On Dec 3, 2009, at 2:24, Donovan Brooke wrote:

> Palle, I wrote a merge system a while back that I may have posted to the
> list... Not sure. The system automatically breaks the tasks into chunks,
> only you don't have to babysit it.
>
> Maybe check both archives....
>
> Donovan
>
> ===================
> d.brooke - mobile
> www.euca.us
> ===================
>
> On Dec 2, 2009, at 3:15 PM, Palle Bo Nielsen wrote:
>
>> Dan,
>>
>> I understand what you are getting at, but it would be preferable to have
>> the IDs immediately.
>>
>> I will let my questions live for 24 hours or so and then move to plan B
>> (in chunks) unless a better idea pops up from the list or my own pillow
>> tonight.
>>
>> Palle
>>
>> On 02/12/2009, at 22.06, Dan Strong wrote:
>>
>>>> 1) Yes, but is not sequential.
>>>
>>> Ok, next question: do the new, unique sequential IDs need to be in there
>>> immediately? If not, perhaps add the new uniques as the .db is otherwise
>>> being hit?
>>>
>>> -Dan Strong
>>> http://www.DanStrong.com
>>>
>>> Palle Bo Nielsen wrote:
>>>> Dan,
>>>> 1) Yes, but it is not sequential.
>>>> 2) Yes, same result - on its knees.
>>>> 3) This is my next plan, but it would be nice to get it done in one
>>>> process.
>>>> Palle
>>>> On 02/12/2009, at 21.56, Dan Strong wrote:
>>>>> 1) Is there any other unique identifier in each record?
>>>>> 2) Have you tried [replacefounditems]?
>>>>> 3) Maybe "chunk it" to only do, say, 500 at a time?
>>>>>
>>>>> -Dan Strong
>>>>> http://www.DanStrong.com
>>>>>
>>>>> Palle Bo Nielsen wrote:
>>>>>> Hi all,
>>>>>> I have a DB which has 11 fields and more than 700.000 lines of data.
>>>>>> Now I need to add a new field, and I need to populate this field with
>>>>>> an index from the number 1 to the most recent number, which for
>>>>>> example is 700.000.
>>>>>> The number 1 is the first line of data within this DB, and the
>>>>>> 700.000 is the most recently added line of data.
>>>>>> I have done some tests and sand-boxing, and every time WebDNA goes to
>>>>>> its knees.
>>>>>> I need some good advice from you all to figure out the best solution.
>>>>>> I look forward to your suggestions.
>>>>>>
>>>>>> **** this one is rough and breaks WebDNA ****
>>>>>> [search db=l4.db&nel4_statusdata=dummy&asl4_serialsort=1][numFound][foundItems]
>>>>>> [replace db=l4.db&eql4_skudata=[l4_sku]]l4_index=[index][/replace]
>>>>>> [/foundItems][/search]
>>>>>> **** snip ****
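Dan's "chunk it" suggestion for the rough snippet above can be sketched outside WebDNA. This is a minimal, hypothetical Python sketch (the `l4_*` field names are borrowed from Palle's snippet; the chunk size and pause are assumptions, not part of the original): assign the sequential index a fixed-size chunk at a time, with a pause between passes, so the whole 700.000-row table is never rewritten in one go.

```python
import time

CHUNK = 500  # rows per pass, per Dan's "500 at a time" suggestion

def populate_index(rows, pause_secs=0.0):
    """Assign rows[n]['l4_index'] = n + 1 in serial order, one chunk per pass."""
    for start in range(0, len(rows), CHUNK):
        for offset, row in enumerate(rows[start:start + CHUNK]):
            row["l4_index"] = start + offset + 1
        time.sleep(pause_secs)  # breathing room between passes

# Toy stand-in for l4.db, already sorted by l4_serial.
rows = [{"l4_sku": f"sku{n}"} for n in range(1200)]
populate_index(rows)
print(rows[0]["l4_index"], rows[-1]["l4_index"])  # 1 1200
```

In WebDNA terms, each pass of the outer loop corresponds to one `[search]` with `startat`/`max` plus a `[replace]` per found record, as in the hot-merge snippet.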
>>>>>> Palle
>>>>>> ---------------------------------------------------------
>>>>>> This message is sent to you because you are subscribed to
>>>>>> the mailing list .
>>>>>> To unsubscribe, E-mail to:
>>>>>> archives: http://mail.webdna.us/list/talk@webdna.us
>>>>>> old archives: http://dev.webdna.us/TalkListArchive/
>>>>>> Bug Reporting: http://forum.webdna.us/eucabb.html?page=topics&category=288
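For readers outside WebDNA, the spawn-and-wait throttling in Donovan's hot-merge snippet can be approximated as follows. This is a hedged Python sketch, not Donovan's actual code: a background thread stands in for `[spawn]`, a sleep stands in for the `[waitforfile]` timeout, and a plain dict stands in for the destination db; block size and pause mirror the snippet's values.

```python
import math
import threading
import time

BLOCK = 500        # records per block, as in the snippet
PAUSE_SECS = 0.01  # stand-in for tNumSecs (10 s in the original)

def merge_block(block, dest, lock):
    """Merge one block of (key, value) records into the destination table."""
    with lock:
        for key, value in block:
            dest[key] = value  # analogous to [replace] on destination.db

def throttled_merge(records, dest):
    lock = threading.Lock()
    workers = []
    for i in range(math.ceil(len(records) / BLOCK)):
        block = records[i * BLOCK:(i + 1) * BLOCK]
        t = threading.Thread(target=merge_block, args=(block, dest, lock))
        t.start()               # analogous to [spawn]
        workers.append(t)
        time.sleep(PAUSE_SECS)  # analogous to [waitforfile ...&timeout=[tNumSecs]]
    for t in workers:
        t.join()

dest = {}
throttled_merge([(n, n * 2) for n in range(1200)], dest)
print(len(dest))  # 1200
```

The design point is the pause between launches: total wall-clock time grows, but no single moment asks the processor to touch more than one block, which matches Donovan's `top -u apache` observation.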
Related Readings:
[WebDNA] my spawned [purchase] seems to be losing the orderfile. ... [MOVEFILE] workaround? (2009)
Price Not Appearing (2000)
Q: how long for answers to the WebDNA-Talk list? (1997)
Error Lob.db records error message not name (1997)
Loss in Form (1998)
Crashes and prior posting (2006)
Creating a back button (1999)
Typo in the Online Docs ... (1997)
MacOS X upgrade pricing plan (1999)
Problem passing variables - IE vs Netscape (2000)
Summing fields (1997)
RE: Writefile outside WebSTAR hierarchy? (1997)
[WebDNA] Problem with Sendmail (2015)
4.5.1 Upgrade... (2003)
Setting up shop (1997)
difference between v.6 and v.7, WAS: [WebDNA] v7 thisurl has different behavour (2012)
[sendmail] and [formvariables] (1997)
math a various prices (1997)
Reversed words (1997)
Server Freeze (1998)