1985 All American 2175 Posts
^ SQL Server - DATEPART(week, DATE) 1/5/2011 1:14:05 AM |
Stein All American 19842 Posts
When you run that command on the date 2011-01-01 what does it return? 1/5/2011 8:50:47 AM |
afripino All American 11433 Posts
should be an integer = 1 1/5/2011 12:04:27 PM |
Stein All American 19842 Posts
See, you would think so, but in MySQL it's actually 52. That's why I ask -- I'm wondering how SQL Server handles it. 1/5/2011 2:04:46 PM |
afripino All American 11433 Posts
it's returning the calendar week of the year, similar to the MySQL WEEKOFYEAR() function. to my knowledge (which is very little for MySQL), there isn't a DATEPART() function by that name. 1/5/2011 3:11:54 PM |
Stein All American 19842 Posts
What I'm saying is that SELECT WEEKOFYEAR('2011-01-01'); in MySQL returns 52.
Because it's not the first week of this year, it's the last week of last year.
SELECT WEEKOFYEAR('2011-01-01'); = 52
SELECT WEEKOFYEAR('2011-01-02'); = 52
SELECT WEEKOFYEAR('2011-01-03'); = 1
1/5/2011 3:19:37 PM |
afripino All American 11433 Posts
Looks like MySQL is using the ISO-8601 standard as described here: http://www.epochconverter.com/epoch/weeknumbers.php
MSSQL, on the other hand, assumes day 1 of week 1 is January 1st. Here's an MSSQL function to convert to the ISO-8601 standard:
drop function dbo.F_ISO_WEEK_OF_YEAR
go
create function dbo.F_ISO_WEEK_OF_YEAR ( @Date datetime )
returns int
as
/*
Function F_ISO_WEEK_OF_YEAR returns the ISO 8601 week of the year
for the date passed.
*/
begin

declare @WeekOfYear int

select
    -- Compute week of year as (days since start of year/7)+1
    -- Division by 7 gives whole weeks since start of year.
    -- Adding 1 starts week number at 1, instead of zero.
    @WeekOfYear =
        (datediff(dd,
            -- Case finds start of year
            case
                when NextYrStart <= @date then NextYrStart
                when CurrYrStart <= @date then CurrYrStart
                else PriorYrStart
            end, @date) / 7) + 1
from (
    select
        -- First day of first week of prior year
        PriorYrStart = dateadd(dd,(datediff(dd,-53690,dateadd(yy,-1,aa.Jan4))/7)*7,-53690),
        -- First day of first week of current year
        CurrYrStart  = dateadd(dd,(datediff(dd,-53690,aa.Jan4)/7)*7,-53690),
        -- First day of first week of next year
        NextYrStart  = dateadd(dd,(datediff(dd,-53690,dateadd(yy,1,aa.Jan4))/7)*7,-53690)
    from (
        select
            -- Find Jan 4 for the year of the input date
            Jan4 = dateadd(dd,3,dateadd(yy,datediff(yy,0,@date),0))
    ) aa
) a

return @WeekOfYear

end
go
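(Side note: SQL Server 2008 and later also ship a built-in ISO week datepart, so on newer versions the function above isn't needed:)

SELECT DATEPART(iso_week, '2011-01-01');  -- returns 52, matching MySQL's WEEKOFYEAR()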
[Edited on January 5, 2011 at 3:49 PM. Reason : ] 1/5/2011 3:49:05 PM |
1985 All American 2175 Posts
^ yeah. A side effect is MSSQL gives the last days of the year as week 53 1/5/2011 4:59:02 PM |
quagmire02 All American 44225 Posts
i was not aware that mysql wouldn't like this ("you can't specify target table 'table' for update in FROM clause"):
UPDATE table SET fapfap=1
WHERE id=(SELECT id FROM table WHERE lol='wut' ORDER BY date DESC LIMIT 1,1)

so how would i do this?
[Edited on February 9, 2011 at 9:22 PM. Reason : .] 2/9/2011 9:21:56 PM |
msb2ncsu All American 14033 Posts
Works if you select from an aliased subquery:
UPDATE mantis_user_table
SET realname='changed'
WHERE id = (
    SELECT id
    FROM (SELECT * FROM mantis_user_table WHERE username='test') as qwerty
)
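(Applied to the query from the question, with the same placeholder names, the workaround would look something like this:)

UPDATE `table` SET fapfap=1
WHERE id = (
    SELECT id
    FROM (
        SELECT id
        FROM `table`
        WHERE lol='wut'
        ORDER BY date DESC
        LIMIT 1,1
    ) AS picked
);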
Ugly as hell but performs the update. 2/10/2011 2:34:41 AM |
quagmire02 All American 44225 Posts
^ worked perfectly...thanks!
and yes, it's pretty damn ugly 2/10/2011 7:53:37 AM |
stowaway All American 11770 Posts
anyone done any paypal IPN handler integration? I can't get this crap to transfer the IPN variables to the database at all. It's not spitting out any errors, and the IPN part seems to be handled properly when doing a test transaction in sandbox, but it isn't working right. I've gone through the tutorial a dozen times. I've copy and pasted the code. I've gone so far as to remove any changes in values/names I may want, and it still isn't sticking the stuff in the db. The tables have been set up manually, by uploading .sql files, and through the SQL query panel. None of this has made a difference.
<?php
/*
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
 * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
 * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 * author: codex-m
 * website: http://www.php-developer.org
 * license: open source
 */
// Read my Paypal IPN tutorial series in PHP here: http://www.devshed.com/cp/bio/Codex%98M/

// Echo the POSTed IPN variables back to PayPal for validation
$req = 'cmd=_notify-validate';
foreach ($_POST as $key => $value) {
    $value = urlencode(stripslashes($value));
    $req .= "&$key=$value";
}
$header = "POST /cgi-bin/webscr HTTP/1.0\r\n"; // note: original paste had "$header .=" on an undefined variable
$header .= "Content-Type: application/x-www-form-urlencoded\r\n";
$header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
$fp = fsockopen('ssl://www.sandbox.paypal.com', 443, $errno, $errstr, 30);

// Pull out the IPN fields we care about
$payment_status    = $_POST['payment_status'];
$payment_amount    = $_POST['mc_gross'];
$payment_currency  = $_POST['mc_currency'];
$txn_id            = $_POST['txn_id'];
$receiver_email    = $_POST['receiver_email'];
$payer_email       = $_POST['payer_email'];
$invoice           = $_POST['invoice'];
$customeripaddress = $_POST['custom'];
$productname       = $_POST['item_name'];

include 'this_is_correct, database connect';

if (!$fp) {
    $log = 'http error=' . $errno;
    $log = mysql_real_escape_string($log);
    mysql_query("INSERT INTO ipnlogs (eventlog) VALUES ('$log')");
} else {
    fputs($fp, $header . $req);
    while (!feof($fp)) {
        $res = fgets($fp, 1024);
        if (strcmp($res, "VERIFIED") == 0) {
            $log = 'Verified IPN Transaction';
            $log = mysql_real_escape_string($log);
            mysql_query("INSERT INTO ipnlogs (eventlog) VALUES ('$log')");
            $txn_id = mysql_real_escape_string($txn_id);
            // Only process transaction IDs we haven't seen before
            if (!($fetch = mysql_fetch_array(mysql_query("SELECT `TransactionID` FROM `customerrecords` WHERE `TransactionID`='$txn_id'")))) {
                if ($receiver_email == 'this_is_correct') {
                    $receiver_email = mysql_real_escape_string($receiver_email);
                } else {
                    die('ERROR: Invalid Paypal Seller Email address.');
                }
                if ($payment_currency == 'USD') {
                    $payment_currency = mysql_real_escape_string($payment_currency);
                } else {
                    die('ERROR: Incorrect currency');
                }
                // Check the amount paid against the product's listed price
                $productname = mysql_real_escape_string($productname);
                $result = mysql_query("SELECT `ProductPrice` FROM `productstable` WHERE `ProductName`='$productname'") or die(mysql_error());
                $row = mysql_fetch_array($result) or die("Invalid query: " . mysql_error());
                $productprice = $row['ProductPrice'];
                if ($payment_amount == $productprice) {
                    $payment_amount = mysql_real_escape_string($payment_amount);
                } else {
                    die('ERROR: Incorrect payment amount');
                }
                if ($payment_status == 'Completed') {
                    $payment_status = mysql_real_escape_string($payment_status);
                } else {
                    die('ERROR: Payment status not completed');
                }
                require_once('is_email.php');
                if (is_email($payer_email)) {
                    $payer_email = mysql_real_escape_string($payer_email);
                } else {
                    die('ERROR: Invalid payer email address');
                }
                if (ctype_alnum($invoice)) {
                    $invoice = mysql_real_escape_string($invoice);
                } else {
                    die('ERROR: The submitted invoice data is NOT a NUMBER');
                }
                if (filter_var($customeripaddress, FILTER_VALIDATE_IP)) {
                    $customeripaddress = mysql_real_escape_string($customeripaddress);
                } else {
                    die('ERROR: The submitted IP address data is NOT valid.');
                }
                $downloadstatus = 'incomplete';
                $downloadstatus = mysql_real_escape_string($downloadstatus);
                mysql_query("INSERT INTO customerrecords (PaymentStatus,PaymentAmount,PaymentCurrency,PayerEmail,ReceiverEmail,TransactionID,InvoiceNumber,ProductPurchased,IPAddress,DownloadStatus) VALUES ('$payment_status','$payment_amount','$payment_currency','$payer_email','$receiver_email','$txn_id','$invoice','$productname','$customeripaddress','$downloadstatus')") or die(mysql_error());
                mysql_close($dbhandle);
            } else {
                die('Could not process request-transaction ID already exist');
            }
        } else if (strcmp($res, "INVALID") == 0) {
            $log = 'Invalid IPN transaction';
            $log = mysql_real_escape_string($log);
            mysql_query("INSERT INTO ipnlogs (eventlog) VALUES ('$log')");
        }
    }
    fclose($fp);
}
?>
2/11/2011 3:20:50 PM |
BigMan157 no u 103354 Posts
Quote: "include 'this_is_correct, database connect';"

you sure you even have a mysql connection open? 2/11/2011 4:40:49 PM |
stowaway All American 11770 Posts
well, it is pointing to the absolute server path of the connect.php file. I've double-checked that part. I'm going to say yes because if I run the IPN test pointing at this file in Sandbox it does insert a verified transaction entry in the ipnlog table. If I run a regular transaction from the index.php page going through the regular paypal checkout system it doesn't. In either case it does not put anything in the customerrecords table.
The customerdownload.php file returns the error "ERROR: The invoice number data is not valid." when submitted, which means it is connecting to the database, looking for an entry, and finding out there isn't an entry.
[Edited on February 11, 2011 at 5:49 PM. Reason : jk] 2/11/2011 5:44:27 PM |
stowaway All American 11770 Posts
any other ideas before I start from scratch? 2/15/2011 1:33:35 PM |
stowaway All American 11770 Posts
ok, no wonder nobody could find anything wrong. It was all on paypal's side of things.
[Edited on February 15, 2011 at 5:49 PM. Reason : afd] 2/15/2011 5:44:04 PM |
CaelNCSU All American 7132 Posts
It would help a lot if you guys used this when pasting code examples:
http://pastebin.com/ 2/15/2011 11:22:18 PM |
Shadowrunner All American 18332 Posts
OK, I'm modeling the probability of failure for hurricane protection systems in Louisiana, and I have a big database of levees and floodwalls and whatnot. For some lengths along levees, we don't have complete survey data, so we want to estimate the probability at those points as being equal to the probability of failure for the closest points at which we do have data. To wit:
Each levee segment is coded with several IDs, a type, and a location. The primary key ID for each object is called swp_id; reach_id and bhu_id identify the levee and the region that the levee is protecting; type indicates whether the segment is a levee, a floodgate, a pump station, etc; and location indicates how many linear feet along the levee the midpoint of that segment is located.
So what I need is a query that, for every swp_id, selects the swp_id of the object closest to that point which 1) has the same reach_id and bhu_id, and 2) is of a certain type (say type <> 1 just as an example). "Closest" is defined by minimizing the difference between the Location value of point A and point B for points A and B having the same reach_id and bhu_id.
This is in PostgreSQL, so I've got support for subselects and things like that. I also have lat/lon coordinates if that's easier to work with than the location values. I also have the data geocoded and could probably do it that way, but I'm not sure how to do the VBA scripting in ArcMap to restrict the geospatial query to only look at points with the same reach_id and bhu_id.
Example:
swp_id  reach_id  bhu_id  type  location
     1         1       1     2         0
     2         1       1     1         2
     3         1       1     2         3
     4         1       1     2         5
     5         1       2     2         0
     6         1       2     1         1
     7         1       2     2         3
The query would return
swp_id  closest_id
     1           3
     2           3
     3           4
     4           3
     5           7
     6           5
     7           5
Assume that every reach_id/bhu_id combination has more than one point of type <> 1, so that there is a valid closest point. I don't think I would assume that the "closest" point is unique, as there are likely cases where the points are evenly spaced so the closest points on each side are the same distance away; in that case, it doesn't matter which one gets selected, but I only need one of them. 3/23/2011 3:05:24 PM |
Stein All American 19842 Posts
Quote: "the probability of failure for hurricane protection systems in Louisiana"
100%
----------------------
In seriousness, let me make sure I understand this... closest_id is just another swp_id, right? And, in the case of swp_id=1, you're determining that swp_id=3 is the closest one because type<>1 and the 0 is closer to 3 than 5, right? 3/23/2011 4:26:55 PM
Shadowrunner All American 18332 Posts
Right. 3/23/2011 4:38:51 PM |
Stein All American 19842 Posts
This is the MySQL for it; it should be pretty close to what the PostgreSQL is:
SELECT t.swp_id,
    (SELECT n.swp_id
     FROM data_table n
     WHERE t.swp_id <> n.swp_id
       AND t.reach_id = n.reach_id
       AND t.bhu_id = n.bhu_id
       AND n.type <> 1
     ORDER BY ABS(t.location - n.location)
     LIMIT 1) closest_id
FROM data_table t
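(For PostgreSQL specifically, a DISTINCT ON query should do the same job without the correlated subquery; a sketch against the same hypothetical data_table:)

SELECT DISTINCT ON (t.swp_id)
       t.swp_id,
       n.swp_id AS closest_id
FROM data_table t
JOIN data_table n
  ON  n.reach_id = t.reach_id
 AND  n.bhu_id   = t.bhu_id
 AND  n.swp_id  <> t.swp_id
 AND  n.type    <> 1
-- DISTINCT ON keeps the first row per t.swp_id under this sort order,
-- i.e. the neighbor with the smallest distance
ORDER BY t.swp_id, ABS(t.location - n.location);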
[Edited on March 23, 2011 at 5:05 PM. Reason : .] 3/23/2011 5:05:09 PM |
Shadowrunner All American 18332 Posts
Oh sweet baby Jesus, for some reason I didn't think of using ABS(t.location - n.location) in an ORDER BY clause. I was getting stymied by trying to put it in a WHERE clause set equal to the MIN of that difference, and Postgres complaining about having an aggregate function in the WHERE clause. That really simplifies things; I think I should be able to adapt that.
Big ups, thanks a lot. 3/23/2011 5:20:44 PM |
spöokyjon ℵ 18617 Posts
This is more schema related than query related, but if I change the text encoding of a field in a database will it fuck up any of the existing data? If you look at the home page on phpMyAdmin, it says the charset is utf8, but any special characters get converted to those stupid question mark characters. The collation for every text field is latin1_swedish_ci (Swedish? what?? why?). If I change the collation, will that allow special characters to be stored properly? Will it do anything to the text already in the database? I didn't set this up, but when I did something similar recently on my own I just did everything as utf8 and didn't have any problems with special characters. 3/24/2011 11:30:47 AM |
BigMan157 no u 103354 Posts
as a stopgap, if you bust out a SET NAMES utf8 query before any of your other queries it'll fix that
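(i.e., the first query you run after connecting would be:)

-- per-connection: tells MySQL the client sends and expects utf8
SET NAMES utf8;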
also http://php.net/manual/en/function.mysql-set-charset.php if your version is up-to-date enough 3/24/2011 6:27:40 PM |
1985 All American 2175 Posts
I don’t know if this is possible (I know it’s probably not proper), but is there a way to dynamically join a table to itself an arbitrary number of times?
For example, table ‘Transactions’ contains two relevant fields: ‘SKU_out’ and ‘SKU_in’.
This table might keep track of library inventory leaving and returning. When an item is in inventory, it has a unique SKU; when it is checked out, that SKU gets inserted into the ‘SKU_out’ field. When the item comes back, it gets assigned a new SKU and is logged in the ‘SKU_in’ field.
That new SKU might go out again and would come back as some new SKU.
What I would like to be able to do is say ‘how many items have had x number of transactions’ for any arbitrary number x. 4/13/2011 4:15:20 PM |
aaronburro Sup, B 53137 Posts
that just seems like a bad design to begin with... 4/13/2011 7:09:24 PM |
BigMan157 no u 103354 Posts
can you not just do a group by and then a where statement on a count()?
i'm not totally following you so... 4/13/2011 8:38:42 PM |
1985 All American 2175 Posts
^^ yeah, it's bad design
^ No, I can't, since the SKU changes every time it goes out. I'm trying to track how many times a physical object left and came back. The problem is it gets a new label each time it comes in. 4/13/2011 10:11:33 PM |
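(For what it's worth, on an engine with recursive CTEs (PostgreSQL, SQL Server, MySQL 8+) you could walk the label chains without stacking joins by hand. A sketch against the table as described, using only the names from the posts above; SQL Server spells it WITH rather than WITH RECURSIVE:)

WITH RECURSIVE chain AS (
    -- anchor: rows whose outgoing SKU never appeared as an incoming SKU,
    -- i.e. the first trip an item ever made
    SELECT t.SKU_out AS first_sku, t.SKU_in AS current_sku, 1 AS txn_count
    FROM Transactions t
    WHERE NOT EXISTS (SELECT 1 FROM Transactions p WHERE p.SKU_in = t.SKU_out)

    UNION ALL

    -- step: the SKU assigned when the item came back is the SKU that goes out next
    SELECT c.first_sku, t.SKU_in, c.txn_count + 1
    FROM chain c
    JOIN Transactions t ON t.SKU_out = c.current_sku
)
-- "how many items have had x transactions" for every x at once
SELECT txn_count, COUNT(*) AS items
FROM (
    SELECT first_sku, MAX(txn_count) AS txn_count
    FROM chain
    GROUP BY first_sku
) per_item
GROUP BY txn_count;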
FroshKiller All American 51913 Posts
can we get quagmire02 in here to open his dick trap about you guys criticizing the design for a page 4/14/2011 8:34:53 AM |
scrager All American 9481 Posts
^^ can you do a data update and assign an object id to all of the sku ins and outs? then you can do a count or grouping on object id? i.e., the object id stays the same even though the object gets a new label every time it comes in. 4/15/2011 2:22:56 PM |
1985 All American 2175 Posts
No, I can't update anything.
I guess I could transfer everything to our servers and rearrange the data. 4/20/2011 2:18:26 PM |
spöokyjon ℵ 18617 Posts
Storing images in SQL as blobs as opposed to on the file system is pretty much an objectively bad idea, right? 4/29/2011 12:07:02 PM |
quagmire02 All American 44225 Posts
Quote: "can we get quagmire02 in here to open his dick trap about you guys criticizing the design for a page"
hah, i just saw this...i do really like how you THINK you have a lot to contribute, but never actually manage to share your expansive knowledge with anyone 4/29/2011 12:43:06 PM
FroshKiller All American 51913 Posts
I wouldn't say so. Depends on your implementation. If I had some relatively light binaries that I wanted associated with records in the database (like badges identifying a usergroup or some shit), I might be better off using BLOBs just for the sake of being able to secure them with the database's security and distribute them with the data innately, e.g. just distribute a back-up rather than an archive with the back-up and a folder of files. 4/29/2011 12:47:59 PM |
Stein All American 19842 Posts
Quote: "Storing images in SQL as blobs as opposed to on the file system is pretty much an objectively bad idea, right?"
Outside of some very, very, very specific purposes, it's a bad idea. It's not what the database is for and it's not how it should be used.
[Edited on April 29, 2011 at 3:04 PM. Reason : .] 4/29/2011 3:04:19 PM |
qntmfred retired 40807 Posts
i'm just gonna use this thread to gripe for a minute
data architect at work gave us a table with a compound key of five GUIDs
i swear to god he's just trolling us 5/6/2011 4:41:47 PM |
FroshKiller All American 51913 Posts
maybe he just wanted to ensure the values were also unique across four alternate earths 5/9/2011 1:20:10 PM |
EuroTitToss All American 4790 Posts
^,^^ very effective in http://en.wikipedia.org/wiki/Anathem 5/9/2011 2:40:25 PM |
spöokyjon ℵ 18617 Posts
Okay, so I have ~5000 customer records that are entered, updated, etc. on an internal web site (php/mysql). A small portion of this data (each customer's name, account number, and account expiration date) is transferred to a local MS SQL server periodically via a C# app run locally (our POS machines run off of the local MS SQL database, so we need to do this so that when a customer's membership card is swiped, the POS can look up current account information on the local server).
If all of this was on the same system, or if the web site had a way of updating the local database that I was comfortable with (i.e. I'd rather not open it up to the outside world), I would simply update each record as needed on both databases at the same time. Since that's not really an option here, I'm left with batching the data.
What I currently do is simply delete all records from the table on the local server and then insert everything back from the web site. Is that bad? Or, more specifically:
Mass deletion and insertion to update batches of data is:
1) the worst idea ever.
2) not the best way to do it, but it won't hurt anybody.
3) okay.
Which is the case? Does any of this even matter on such a small number of records? Would it make a difference if I did it daily?
tl;dr: is deleting and reinserting a 5000-record table daily a bad idea? PLEASE I HAVE A MATCH TONIGHT 8/20/2011 1:57:25 PM |
Novicane All American 15416 Posts
I don't see a problem with batching if it's just a few-times-a-week deal where you do it on your own time. I'm just not sure why you're not comfortable opening up the database. My old database had 14k+ warranty entries for our products and 20k+ order entries. I would back them up monthly and copy them over to a test server. It took all of like 2 minutes to log into phpMyAdmin and do it. Once everything was backed up to a txt file, it was still only like 3 MB, which is crazy to think about because it's years and years of data and order entry.
If you were doing it every 5 minutes or every entry then we'd have some problems.
But it's just 5,000 records; that seems like a lot to us, but in SQL terms it's literally nothing to handle.
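(If the wipe-and-reload ever does become a problem, SQL Server 2008+ has MERGE, which applies the whole batch as one atomic statement. A sketch with made-up table/column names, assuming the C# app loads the batch into a staging table first:)

MERGE dbo.Customers AS target
USING dbo.Customers_Staging AS source
    ON target.AccountNumber = source.AccountNumber
WHEN MATCHED THEN
    UPDATE SET target.Name           = source.Name,
               target.ExpirationDate = source.ExpirationDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (AccountNumber, Name, ExpirationDate)
    VALUES (source.AccountNumber, source.Name, source.ExpirationDate)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;  -- drop local rows that no longer exist on the web side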
[Edited on August 21, 2011 at 3:23 PM. Reason : d] 8/21/2011 3:20:05 PM |
joepeshi All American 8094 Posts
I may be starting a job where I would be a complete novice with this...please bear w/ me! 2/1/2013 1:36:53 AM |
Novicane All American 15416 Posts
ah memories.
I no longer work with databases and I miss it. 2/1/2013 7:10:42 AM |
1985 All American 2175 Posts
What is the best way to delete a large amount of records from an unindexed table that contains ~ 3 billion rows?
The table cannot be indexed because of size constraints on the DB. I've tried running:
DECLARE @stillgoing bit;
SET @stillgoing = 1;

WHILE @stillgoing = 1
BEGIN
    DELETE TOP (10000) DataWarehouse
    WHERE fieldname = 'foo';

    IF @@ROWCOUNT = 0
        SET @stillgoing = 0;

    CHECKPOINT;
END
But that looks like it will need to run for weeks to remove all the rows that I need to remove. 12/20/2013 1:16:37 PM |
skywalkr All American 6788 Posts
Any of you guys have recommended resources for SQL for reporting/basic analytics? We use Toad at work, and my background is in SAS, so I know basic SQL from PROC SQL, but there are a lot of things I want to do that I could easily do in SAS where I have to hunt to find the SQL equivalent. I have a feeling there are a ton of things I could do but just don't know about, and most of the books I find are more for database analysts and the like. 12/20/2013 1:21:29 PM |
lewisje All American 9196 Posts
^^well that's what you get for not indexing (sry you couldn't index it)
too bad you can't just DROP TABLE and start over 12/20/2013 7:05:43 PM |
Novicane All American 15416 Posts
^^ never used Toad. I've used SQLyog 12/20/2013 8:21:54 PM |
skywalkr All American 6788 Posts
It isn't really anything special in terms of syntax, just PL/SQL (Oracle). I don't really know the PL part, though. I'm sure there is plenty there that I could take advantage of, but the stuff I have read is a bit more techy than I would like. 12/20/2013 8:33:08 PM |
FroshKiller All American 51913 Posts
1985 said:
Quote: "What is the best way to delete a large amount of records from an unindexed table that contains ~ 3 billion rows?
[snip]
The table cannot be indexed because of size constraints on the DB.
But that looks like it will need to run for weeks to remove all the rows that I need to remove"
Well, you ought to fuckin' index it. Add storage, create a new filegroup, create indexes on the new filegroup. Come on, now.
How many records do you need to keep? Could you truncate the table, create your indexes, then do a bulk insert or something from a backup of just the records you wanted to keep?
What version of SQL Server is it?
Where should I mail my invoice? 12/20/2013 10:33:16 PM |
afripino All American 11433 Posts
^agreed.
[Edited on December 21, 2013 at 8:10 AM. Reason : ] 12/21/2013 8:07:59 AM |
Stein All American 19842 Posts
1) BCP out
2) Write a script to remove what you want
3) BCP in
12/21/2013 1:41:07 PM |
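(Along the same lines, when most of a huge heap is being removed, copying out the keepers and swapping is a common alternative to deleting in place. A rough sketch reusing the hypothetical names from the question; it assumes you can take an outage and have room for the copy:)

-- copy only the rows you want to keep
SELECT *
INTO dbo.DataWarehouse_keep
FROM dbo.DataWarehouse
WHERE fieldname <> 'foo';

-- swap: drop the original, rename the copy into place
DROP TABLE dbo.DataWarehouse;
EXEC sp_rename 'dbo.DataWarehouse_keep', 'DataWarehouse';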