Friday, March 30, 2012

Optimizing table with more than 54 million records

I have a table that has more than 54 million records and I'm searching
records using the LIKE statement. I'm looking for ways to
optimize/partition/etc. this table.
This is the table structure:
TABLE "SEARCHCACHE"
Fields:
- searchType int
- searchField int
- value varchar(500)
- externalKey int
For example, a simple search would be:
*******
SELECT TOP 100 *
FROM SearchCache
WHERE (searchType = 2) AND (searchField = 2) AND (value LIKE 'name1%' OR
value LIKE 'name2%')
*******
This works just fine; it retrieves records in 2-3 seconds. The problem
is when I combine more predicates. For example:
*******
SELECT TOP 100 *
FROM SearchCache
WHERE ((searchType = 2) AND (searchField = 2) AND (value LIKE 'name1%'
OR value LIKE 'name2%'))
OR ((searchType = 2) AND (searchField = 3) AND (value LIKE
'anothervalue1%' OR value LIKE 'anothervalue2%'))
*******
This may take up to 30-40 seconds!
Any suggestion?
Thanks in advance.
On Jul 30, 1:23 pm, Gaspar <gas...@no-reply.com> wrote:
> <snip>
Maybe you could consider full-text search on the table? This would
require rewriting the queries though, AFAIR.
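For reference, a prefix search rewritten for full-text might look like this (a sketch only, assuming a full-text index has been created on the value column; CONTAINS prefix terms use the "word*" syntax):
*******
SELECT TOP 100 *
FROM SearchCache
WHERE (searchType = 2) AND (searchField = 2)
AND CONTAINS(value, '"name1*" OR "name2*"')
*******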
|||The original solution used FullText but it didn't work for my scenario.
That's why I implemented this SearchCache table.
Thanks.
Piotr Rodak wrote:
> On Jul 30, 1:23 pm, Gaspar <gas...@no-reply.com> wrote:
> maybe you could consider full-text search on the table? this would
> require afair rewriting queries though.
>
|||Gaspar
1) Don't use TOP 100, especially without an ORDER BY clause
2) Please provide some details from an execution plan
"Gaspar" <gaspar@.no-reply.com> wrote in message
news:e1xYASq0HHA.4824@.TK2MSFTNGP02.phx.gbl...
>I have a table that has more than 54 million records and I'm searching
> records using the LIKE statement. I looking for ways to
> optimize/partition/etc. this table.
> This is the table structure:
> TABLE "SEARCHCACHE"
> Fields:
> - searchType int
> - searchField int
> - value varchar(500)
> - externalKey int
> For example, a simple search would be:
> *******
> SELECT TOP 100 *
> FROM SearchCache
> WHERE (searchType = 2) AND (searchField = 2) AND (value LIKE 'name1%' OR
> value LIKE 'name2%')
> *******
> This works just fine, it retrieves records in 2-3 seconds. The problem
> is when I combine more predicates. For example:
> *******
> SELECT TOP 100 *
> FROM SearchCache
> WHERE ((searchType = 2) AND (searchField = 2) AND (value LIKE 'name1%'
> OR value LIKE 'name2%'))
> OR ((searchType = 2) AND (searchField = 3) AND (value LIKE
> 'anothervalue1%' OR value LIKE 'anothervalue2%'))
> *******
> this may take up to 30-40 seconds!
> Any suggestion?
> Thanks in advanced.
|||Gaspar,
Try unioning the result from independent statements.
SELECT TOP 100 *
FROM
(
    SELECT TOP 100 *
    FROM SearchCache
    WHERE ((searchType = 2) AND (searchField = 2)
       AND (value LIKE 'name1%' OR value LIKE 'name2%'))
    UNION
    SELECT TOP 100 *
    FROM SearchCache
    WHERE ((searchType = 2) AND (searchField = 3)
       AND (value LIKE 'anothervalue1%' OR value LIKE 'anothervalue2%'))
) AS t
AMB
"Gaspar" wrote:

> I have a table that has more than 54 million records and I'm searching
> records using the LIKE statement. I looking for ways to
> optimize/partition/etc. this table.
> This is the table structure:
> TABLE "SEARCHCACHE"
> Fields:
> - searchType int
> - searchField int
> - value varchar(500)
> - externalKey int
> For example, a simple search would be:
> *******
> SELECT TOP 100 *
> FROM SearchCache
> WHERE (searchType = 2) AND (searchField = 2) AND (value LIKE 'name1%' OR
> value LIKE 'name2%')
> *******
> This works just fine, it retrieves records in 2-3 seconds. The problem
> is when I combine more predicates. For example:
> *******
> SELECT TOP 100 *
> FROM SearchCache
> WHERE ((searchType = 2) AND (searchField = 2) AND (value LIKE 'name1%'
> OR value LIKE 'name2%'))
> OR ((searchType = 2) AND (searchField = 3) AND (value LIKE
> 'anothervalue1%' OR value LIKE 'anothervalue2%'))
> *******
> this may take up to 30-40 seconds!
> Any suggestion?
> Thanks in advanced.
>
|||You may want to try running the Profiler and checking the suggestions from the
tuning wizard for the tables - it could be a lack of a good index.
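A covering index along these lines might be worth testing (a sketch; the index name is illustrative):
*******
CREATE INDEX IX_SearchCache_Type_Field_Value
ON SearchCache (searchType, searchField, value)
*******
With such an index, the equality predicates plus the LIKE 'prefix%' patterns can be answered by index seeks instead of a full scan.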
Regards,
Jamie
"Gaspar" wrote:

> I have a table that has more than 54 million records and I'm searching
> records using the LIKE statement. I looking for ways to
> optimize/partition/etc. this table.
> This is the table structure:
> TABLE "SEARCHCACHE"
> Fields:
> - searchType int
> - searchField int
> - value varchar(500)
> - externalKey int
> For example, a simple search would be:
> *******
> SELECT TOP 100 *
> FROM SearchCache
> WHERE (searchType = 2) AND (searchField = 2) AND (value LIKE 'name1%' OR
> value LIKE 'name2%')
> *******
> This works just fine, it retrieves records in 2-3 seconds. The problem
> is when I combine more predicates. For example:
> *******
> SELECT TOP 100 *
> FROM SearchCache
> WHERE ((searchType = 2) AND (searchField = 2) AND (value LIKE 'name1%'
> OR value LIKE 'name2%'))
> OR ((searchType = 2) AND (searchField = 3) AND (value LIKE
> 'anothervalue1%' OR value LIKE 'anothervalue2%'))
> *******
> this may take up to 30-40 seconds!
> Any suggestion?
> Thanks in advanced.
>


Optimizing table in mssql

Hi there,
Is there an optimize table command in MS SQL Server which works the same way as "OPTIMIZE TABLE tablename" in MySQL?

I have a PHP application that should work on both MySQL and MS SQL Server. To do defragmentation, I am using the above command. Is there an equivalent in MS SQL Server?

Cheers,
Celia
|||I presume optimise table defragments the table and indexes? If not - please post what it means.

To defrag an index check out the following in BoL:
2000
- DBCC DBREINDEX
- DBCC INDEXDEFRAG
2005
- ALTER INDEX ... REORGANIZE / REBUILD

You will need to see which options are best for your environment.
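For example, on 2005 a full rebuild or the lighter-weight defrag would be (a sketch; the table name is illustrative):
ALTER INDEX ALL ON dbo.MyTable REBUILD
ALTER INDEX ALL ON dbo.MyTable REORGANIZE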

optimizing stored procedures

hi all,
i'm working in a database, i'm not the creator of it, and was assigned to optimize the stored procedures running against this database.
can someone tell me what steps i should follow to check these SPs, what i should look at, and what can be done to optimise them.

thanks experts
|||It might be a two-line question but it is a massive subject. People make their careers out of this.

read stuff here:
http://www.sql-server-performance.com/
Learn to read execution plans.
Learn about SET STATISTICS IO
Learn to use profiler.
Read BoL (section on Optimising Database Performance).
Learn your database.
Learn about indexing strategies.
|||First make sure all the critical joins are indexed. Then eliminate any cursors or loops.
|||I usually just ask the QA people what parts of the app suck the will to live out of them, and then I trace those screens to identify the really big pieces of suck-ass code and look at the execution plans of those queries. I like being reactive.
|||exactly. the ONLY sure way to find where the perf problems are is to measure.

ask any dev where the most time is spent in their code, and 90% will give you the wrong answer. the other 10% are right only because they have profiled it. :)
|||I would trace it all to a table, then query the damn thing for the longest running pieces of code...

I guess we could make you look like a hero, since it seems no one else knows what's going on.

How big is this company?

And what's your experience level?

And do you have SQL Server client tools installed?

Do you know what books online is?

Do you know what Google is?
|||> I guess we could make you look like a hero
> ...
> Do you know what Google is?
Downgrading as you type.
|||There are some good articles about perf tuning here:

http://www.sql-server-performance.com/articles_performance.asp
http://www.sql-server-performance.com/articles_audit.asp

in particular this one will show you how to use profiler for catching the biggest pigs in your database:
http://www.sql-server-performance.com/sql_server_performance_audit10.asp

and elsewhere on that site.

Be aware, however, SQL is not the only process that can kill your perf. Compiled code can be poorly written too. For compiled code you need to use a different profiling tool (I usually use perf studio in VS).
|||I trace all the queries to a table, as Brett mentioned. Then query it for the total number of reads, grouped by, say, the first 50 or 100 characters of the textdata field. I find it is usually better to tune the average query that runs 1,000s of times per day than the one pig that runs once at midnight. With all the variations that the textdata field can have, getting that aggregate right is a challenge, though.
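A minimal sketch of that aggregate (assuming the trace was saved to a table named TraceResults - an illustrative name; TextData is ntext in trace tables, hence the CAST):
SELECT LEFT(CAST(TextData AS nvarchar(4000)), 100) AS QueryPrefix,
COUNT(*) AS Executions,
SUM(Reads) AS TotalReads
FROM TraceResults
GROUP BY LEFT(CAST(TextData AS nvarchar(4000)), 100)
ORDER BY TotalReads DESC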
|||> ...
> Downgrading as you type.
Are you saying that you should assume something about someone's experience?
|||Nope - but if you take a poster on a forum from not knowing what Google is to being a hero DBA then I will eat all my hats.

Anyway yes - the OP needs to identify the sprocs to tune and then needs to learn the skills to tune them. Or post them here of course.

Optimizing Stored Procedures and Avoiding Cursors....

I'm trying to optimize an ETL batch procedure which takes an extremely long
time to run. I was using cursors for most parts of the batch script in
different stored procedures. Now I'm trying to remove the cursors and replace
them with alternative methods such as using temp tables.
I'm really puzzled as to why the cursor procedure runs much faster than
using temp tables... can anyone point out what is wrong to me?
Below is a snippet of 2 test SPs I've compared against.
TestTable holds 200000 records with 3 columns
1. Using Cursors (This procedure took 4 - 8 secs to run)
declare @test as numeric
declare @header as nvarchar(50)
declare @body as nvarchar(50)
declare @footer as nvarchar(50)
declare test_cursor cursor for select header, body, footer from testTable
order by header
open test_cursor
fetch next from test_cursor into @header, @body, @footer
set @test = 0
while @@fetch_status = 0
begin
set @test = @test + 1
fetch next from test_cursor into @header, @body, @footer
end
select @test
close test_cursor
deallocate test_cursor
2. Using Temp Tables (This procedure took > 1 min)
declare @test as numeric
declare @header as nvarchar(50)
declare @body as nvarchar(50)
declare @footer as nvarchar(50)
declare @count int, @loopcount int
select identity(int,1,1) id, header, body, footer into #temp from testtable
select @count = @@rowcount
set @loopcount = 1
while @loopcount <= @count
begin
select @header = header, @body = body, @footer = footer from
#temp where id = @loopcount
set @loopcount = @loopcount + 1
end
drop table #temp
|||Looking at the code I think it may be the WHERE clause on the temp table;
remember that it has to search through all 200000 records, whereas with the
cursor you're going through them from start to finish, no searching.
Peter
"Happiness is nothing more than good health and a bad memory."
Albert Schweitzer
"Nestor" wrote:

> <snip>
>
>
|||Both 1 and 2 are cursors! Looping through a temp table is just a cursor
in disguise. Unfortunately your sample code doesn't do anything useful
so it's impossible to suggest an alternative set-based solution without
using cursors. Your goal should be to avoid processing one row at a
time by ANY method.
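For what the posted sample actually computes - a running row count - the set-based equivalent is a single statement. A sketch:
DECLARE @test int
SELECT @test = COUNT(*) FROM testTable
SELECT @test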
If you need more help please post DDL, sample data INSERTs and show
your required end result.
http://www.aspfaq.com/etiquette.asp?id=5006
David Portas
SQL Server MVP
--
|||Hello Nestor,
Going through your code, I don't see any reason why the #temp based approach
should be any better than the cursor based approach (if it is supposed to be).
The #temp based approach loops anyway with a WHILE loop, which is similar
to the cursor based method.
Normally, when we say we have to get rid of the cursor, we mean replacing
the cursor based approach with a SET BASED APPROACH, and your #temp based
approach still works record by record. And as Peter says, for each iteration
a select is executed against the table.
Hope this helps.
Gopi
"Nestor" <n3570r@.yahoo.com> wrote in message
news:ORU5F$IJFHA.1172@.TK2MSFTNGP12.phx.gbl...
> <snip>
>
|||Although I haven't tested it, I'd expect the pseudo-cursor would perform
much better if you add a unique clustered index on the #temp id column.
However, as David pointed out, changing a cursor to another iterative
technique isn't the ideal solution and might not be any faster than the
cursor.
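That suggestion as a sketch (the index name is illustrative):
CREATE UNIQUE CLUSTERED INDEX IX_temp_id ON #temp (id)
With it, each iteration's "where id = @loopcount" lookup becomes a seek rather than a scan of the whole temp table.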
We can't make any specific recommendations because the example you posted
does no productive work. If your actual process does something like execute
another proc for each row returned, the set-based solution would be to use
in-line set-based processing instead. This technique is often orders of
magnitude faster than an iterative approach.
Hope this helps.
Dan Guzman
SQL Server MVP
"Nestor" <n3570r@.yahoo.com> wrote in message
news:ORU5F$IJFHA.1172@.TK2MSFTNGP12.phx.gbl...
> <snip>
>
|||Dan,
Could you please share with me what "in-line set-based " means.
I was looking at a stored proc which has nested cursors (one cursor inside
another cursor), and in the while loop of the second an SP is called with
two parameters (each of them coming from one of the cursors).
Here is the pseudo-code:
Declare Cursor Cursor1 Select * from table1
Open Cursor1
Fetch into @var1
While @@FETCH_STATUS <> 1
Declare Cursor Cursor2 Select * from table2
Open Cursor2
Fetch into @var2
While @@FETCH_STATUS <> 1
Exec Spname @var1, @var2
........
.....
DEALLOCATE Cursor1
DEALLOCATE Cursor2
I have noticed that the two cursors can be combined into one using
appropriate Joins.
However, I was under the impression I will still have to use a Cursor for
executing the SP
Could you please point me in the right direction if there is a way out.
Thanks,
Gopi
<< If your actual process does something like execute another proc for each
row returned, the set-based
solution would be to use in-line set-based processing instead>>
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:OXOup8KJFHA.4092@.tk2msftngp13.phx.gbl...
> <snip>
|||> However, I was under the impression I will still have to use a Cursor for
> executing the SP
Yes, using an SP in this way forces you to use the iterative row-by-row
approach. The set-based solution is to perform whatever logic is
contained in SPname using a query against your tables so that the
entire operation is performed against the data set rather than one row
at a time. For data manipulation operations that usually has advantages
over the cursor in terms of performance, scalability and
maintainability.
David Portas
SQL Server MVP
--
|||> Here is the pseudo-code:
Your pseudo-code effectively calls the proc for each row returned by the
Cartesian product of table1 and table2. Assuming that the executed proc
simply inserts into a third table, a working code example:
CREATE TABLE table1
(
Col1 int NOT NULL PRIMARY KEY
)
INSERT INTO table1 VALUES(1)
INSERT INTO table1 VALUES(2)
INSERT INTO table1 VALUES(3)
CREATE TABLE table2
(
Col1 int NOT NULL PRIMARY KEY
)
INSERT INTO table2 VALUES(4)
INSERT INTO table2 VALUES(5)
INSERT INTO table2 VALUES(6)
CREATE TABLE table3
(
Col1 int NOT NULL,
Col2 int NOT NULL,
PRIMARY KEY (Col1, Col2)
)
GO
CREATE PROC Spname
@var1 int,
@var2 int
AS
SET NOCOUNT ON
INSERT INTO Table3(Col1, Col2)
VALUES(@var1, @var2)
GO
DECLARE @var1 int,
@var2 int
DECLARE Cursor1 CURSOR
LOCAL FAST_FORWARD READ_ONLY FOR
SELECT * FROM table1
OPEN Cursor1
FETCH NEXT FROM Cursor1 INTO @var1
WHILE @@FETCH_STATUS <> -1
BEGIN
DECLARE Cursor2 CURSOR LOCAL FOR SELECT * FROM table2
OPEN Cursor2
FETCH NEXT FROM Cursor2 INTO @var2
WHILE @@FETCH_STATUS <> -1
BEGIN
EXEC Spname @var1, @var2
FETCH NEXT FROM Cursor2 INTO @var2
END
CLOSE Cursor2
DEALLOCATE Cursor2
FETCH NEXT FROM Cursor1 INTO @var1
END
CLOSE Cursor1
DEALLOCATE Cursor1
GO

> I have noticed that the two cursors can be combined into one using
> appropriate Joins.
Exactly. That's the first step.

> However, I was under the impression I will still have to use a Cursor for
> executing the SP
Yes you will. This is why I suggested including the equivalent proc code
inline *instead of* executing the proc for each row. The entire cursor
script and proc code in my functional example could be replaced with a
single insert statement:
INSERT INTO Table3(Col1, Col2)
SELECT t1.Col1, t2.Col1
FROM table1 t1
CROSS JOIN table2 t2
Of course, a lot depends on exactly what the called proc does. Also, your
actual code probably isn't a CROSS JOIN. A simple INNER JOIN example:
INSERT INTO Table3(Col1, Col2)
SELECT t1.Col1, t2.Col1
FROM table1 t1
JOIN table2 t2 ON
t1.Col1 = t2.SomeColumn
If you need more help, you need to provide details on what the proc actually
does. Actual code will help.
Hope this helps.
Dan Guzman
SQL Server MVP
"rgn" <gopinathr@.healthasyst.com> wrote in message
news:%23Sx92CLJFHA.1948@.TK2MSFTNGP14.phx.gbl...
> <snip>
|||Thanks for pointing that out. I've got the code base from an article at
http://www.extremeexperts.com/SQL/A...TSQLResult.aspx
Am I applying the instructions from that article incorrectly? In any case, what
exactly is SET BASED APPROACH? I do realise and know that SQL Server is
meant to be relational and not sequential, but I need to understand more on
how to apply the alternative first before trying to overhaul the scripts
running in the production servers.
Can anyone give me an equivalent of the cursor procedure I posted using a SET
BASED APPROACH so that I can better relate?
Thanks.
Regards,
Nestor
"rgn" <gopinathr@.healthasyst.com> wrote in message
news:e$w%23rZJJFHA.2604@.TK2MSFTNGP15.phx.gbl...
> <snip>
|||The article demonstrates some techniques for iteration but doesn't give
any recommendations as to when to use them. 99.99% of the time
row-by-row processing isn't necessary so those techniques shouldn't be
required. The Set Based approach means that you use SQL DML statements
(basically SELECT, UPDATE, INSERT, DELETE) that operate on a SET of
rows at a time rather than ONE row at a time. That should always be the
standard, preferred way to write code.

> Can anyone give me an equivalent of the cursor procedure I posted using a SET
> BASED APPROACH so that I can better relate?
But your cursor didn't do anything except assign a few variables. How
can we show you how to replace it if we don't know what it's supposed
to do? What we need to know is what you are actually trying to achieve
then we can suggest the right solution. It may even be that there isn't
a set-based solution to your problem - that's rarely the case but we
can't tell you until we understand the problem.
The following article explains the best way to post your problem here:
http://www.aspfaq.com/etiquette.asp?id=5006
David Portas
SQL Server MVP
--

Optimizing store proc

I have the following stored proc and was wondering if I can optimize it by using a SELECT CASE instead of all those IFs? I tried but don't know how to write it.

Thanks

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER PROCEDURE [dbo].[Get_Cl_SearchMultiColumn]
(
@strSearchTermColumnName nvarchar(50),
@strSearchTerm nvarchar(200)
)
as
if (@strSearchTermColumnName = 'Monitor')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Monitor1,Monitor2
FROM Cl_Systems
WHERE contains(Monitor1,@strSearchTerm) or contains(Monitor2,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'MonitorSerial')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Monitor1Serial,Monitor2Serial
FROM Cl_Systems
WHERE contains(Monitor1Serial,@strSearchTerm) or contains(Monitor2Serial,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'Microscope')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Microscope1,Microscope2
FROM Cl_Systems
WHERE contains(Microscope1,@strSearchTerm) or contains(Microscope2,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'SerialMicroscope')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,SerialMicroscope1,SerialMicroscope2
FROM Cl_Systems
WHERE contains(SerialMicroscope1,@strSearchTerm) or contains(SerialMicroscope2,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'Controller')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Controller1,Controller2
FROM Cl_Systems
WHERE contains(Controller1,@strSearchTerm) or contains(Controller2,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'ControllerFirmware')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Cont1Firmware,Cont2Firmware
FROM Cl_Systems
WHERE contains(Cont1Firmware,@strSearchTerm) or contains(Cont2Firmware,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'SerialController')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,SerialController1,SerialController2
FROM Cl_Systems
WHERE contains(SerialController1,@strSearchTerm) or contains(SerialController2,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'Joystick')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Joystick1,Joystick2
FROM Cl_Systems
WHERE contains(Joystick1,@strSearchTerm) or contains(Joystick2,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'JoystickFirmware')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Joy1Firmware,Joy2Firmware
FROM Cl_Systems
WHERE contains(Joy1Firmware,@strSearchTerm) or contains(Joy2Firmware,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'SerialJoystick')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,SerialJoystick1,SerialJoystick2
FROM Cl_Systems
WHERE contains(SerialJoystick1,@strSearchTerm) or contains(SerialJoystick2,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'Camera')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Camera1,Camera2,Camera3,Camera4
FROM Cl_Systems
WHERE contains(Camera1,@strSearchTerm) or contains(Camera2,@strSearchTerm) or contains(Camera3,@strSearchTerm) or contains(Camera4,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'CameraSerial')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Camera1Serial,Camera2Serial,Camera3Serial,Camera4Serial
FROM Cl_Systems
WHERE contains(Camera1Serial,@strSearchTerm) or contains(Camera2Serial,@strSearchTerm) or contains(Camera3Serial,@strSearchTerm) or contains(Camera4Serial,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'ZMotor')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,ZMotor1,ZMotor2,ZMotor3
FROM Cl_Systems
WHERE contains(ZMotor1,@strSearchTerm) or contains(ZMotor2,@strSearchTerm) or contains(ZMotor3,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'Stage')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Stage1,Stage2,Stage3
FROM Cl_Systems
WHERE contains(Stage1,@strSearchTerm) or contains(Stage2,@strSearchTerm) or contains(Stage3,@strSearchTerm)
return 0
end
if (@strSearchTermColumnName = 'Lens')
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Lens1,Lens2,Lens3
FROM Cl_Systems
WHERE contains(Lens1,@strSearchTerm) or contains(Lens2,@strSearchTerm) or contains(Lens3,@strSearchTerm)
return 0
end

I don't know why you need a WHERE clause with the CONTAINS predicate, but try the link below about the CASE statement, from an expert. Hope this helps.

http://www.craigsmullins.com/ssu_0899.htm

|||

Well, it seems that I am trying to use CASE in a different situation than what is explained everywhere.

I would like to use it on one of the parameters sent to my stored proc, but I am not sure if it is possible and how to write it

Something like:

Select Case @strSearchTermColumnName

Case 'Monitor'
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Monitor1,Monitor2
FROM Cl_Systems
WHERE contains(Monitor1,@strSearchTerm) or contains(Monitor2,@strSearchTerm)
return 0
end

case 'MonitorSerial'
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Monitor1Serial,Monitor2Serial
FROM Cl_Systems
WHERE contains(Monitor1Serial,@strSearchTerm) or contains(Monitor2Serial,@strSearchTerm)
return 0
end
...

End Select

|||

Oops, I made a few mistakes in my last statement; it should have been:

Well, it seems that I am trying to use CASE in a different situation than what is explained everywhere.

I would like to use it on one of the parameters sent to my stored proc, but I am not sure if it is possible and how to write it

Something like:

Select Case @strSearchTerm

Case 'Monitor'
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Monitor1,Monitor2
FROM Cl_Systems
WHERE contains(Monitor1,@strSearchTerm) or contains(Monitor2,@strSearchTerm)
return 0
end

case 'MonitorSerial'
begin
SELECT CustomerID,SystemId,CompanyName,City,State,Country,Monitor1Serial,Monitor2Serial
FROM Cl_Systems
WHERE contains(Monitor1Serial,@strSearchTerm) or contains(Monitor2Serial,@strSearchTerm)
return 0
end
...

End Select

|||

What I am saying is I don't know if you can use the CONTAINS predicate with a CASE statement. Of the links below, one is a full-text expert who knows more about fulltext, and the other two are the two versions of CONTAINS. Hope this helps.

http://spaces.msn.com/jtkane/

http://msdn2.microsoft.com/en-US/library/ms189760.aspx

http://msdn2.microsoft.com/en-US/library/ms187787.aspx
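Worth noting: T-SQL's CASE is an expression that returns a value, not a control-of-flow construct, so it cannot choose between whole SELECT statements - IF blocks (as in the original procedure) or dynamic SQL are the usual alternatives. A dynamic SQL sketch, assuming the searched columns follow the <Name>1/<Name>2 pattern (which not all of them do - e.g. 'ControllerFirmware' maps to Cont1Firmware/Cont2Firmware, so those cases would still need special handling):

DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT CustomerID, SystemId, CompanyName, City, State, Country, '
    + QUOTENAME(@strSearchTermColumnName + N'1') + N', '
    + QUOTENAME(@strSearchTermColumnName + N'2')
    + N' FROM Cl_Systems WHERE contains('
    + QUOTENAME(@strSearchTermColumnName + N'1') + N', @term) or contains('
    + QUOTENAME(@strSearchTermColumnName + N'2') + N', @term)'
EXEC sp_executesql @sql, N'@term nvarchar(200)', @term = @strSearchTerm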


Optimizing Star Schemas

Hello All,
I was asked this question: "How will you go about optimizing Star Schemas?"
I understand data warehouses and what a Star Schema is.
However, I was not sure how to address this question.
Can someone point me in the right direction?
Thanks,
rgn
|||Hello Gopi,
I guess it might be good to qualify what you mean by "Optimizing Star
Schemas"
In my view there are two main points of Optimization
1. ETL: The process of populating the Star Schema.
2. Star Schema Query performance: The creation of reports and ad hoc
queries.
I am assuming you are really interested in Star Schema Query
performance. I am also assuming you are using surrogate keys. Here are
some basic rules.
1. Make sure all dimension PK keys have a clustered index that is
used for joining to the fact table.
2. Make sure the fact table has a good covering index for all the
corresponding dimension keys. A clustered index would be preferable.
3. Check the covering index is in the right order for most common
queries.
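A sketch of points 1 and 2 (the table, column and index names are illustrative, not from the post):
CREATE UNIQUE CLUSTERED INDEX PK_DimCustomer ON DimCustomer (CustomerKey)
CREATE CLUSTERED INDEX IX_FactSales_DimKeys ON FactSales (DateKey, CustomerKey, ProductKey)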
I would recommend that you look at using Analysis Services 2005 as your
main query engine. It will give you the best query performance by far.
Hope this helps,
Myles Matheson
Data Warehouse Architect
http://bi-on-sql-server.blogspot.com/

Optimizing SQL Query performance

Hi,
I have collected SQL Trace events into a trace table by exporting the trace
into a table.
I have a table Trace1 with the following columns:
-RowNumber
-ApplicationName
-DatabaseName
.
.
.
-StartTime
-EndTime
Clustered Index on RowNumber Column.
NonClustered Index on StartTime Column.
The Trace1 table contains over 100,000 rows.
Now I am firing a query something like
"select * from Trace1 where StartTime > 'Date1' and StartTime < 'Date2'"
But the above query gives me a timeout error in most cases. I have
specified timeout value as 60 seconds.
How can I solve this timeout issue?
Do I need to have some other proper indexes, if current indexes are not
proper?
Do I need to increase the timeout value? What is the optimum value of
Timeout in such scenarios? I am expecting that this table is going to grow
to contain at least 5 crores (50 million) of rows. So please suggest what strategy should
I adopt?
Thanks,
Pushkar
|||If you have an index on StartTime, this should be the most efficient way to
retrieve the data. I would not expect 100,000 rows to take more than a
moment. Even without an index, I would expect the query to finish in
seconds.
Check your execution plan and see if it is using the index.
You could try regenerating your statistics on this table, which should get
it to use this index.
Post the full DDL of your table including the indexes themselves, just so we
are perfectly clear on what you have.
http://www.aspfaq.com/etiquette.asp?id=5006
I think the likely culprit here is the use of "Select * ". If this is a
very wide table, you may be timing out moving all of that data across the
network. Also, how many rows does your typical date range select? If you
usually return 90,000 rows, that makes a big difference. If every row
contains 1 meg of data (an extreme case, just to illustrate a point), for
example, that would be 90 gigs moving over the network, and would timeout
every time.
Also, what application are you using to run the query?
"Pushkar" <pushkartiwari@.gmail.com> wrote in message
news:%23aMdJOHaGHA.1192@.TK2MSFTNGP03.phx.gbl...
> <snip>
>
>
|||Pushkar,
It depends on the relative amount of rows that the query returns.
I would start with adding a (nonclustered) index on StartTime. If the
query returns just a few percent of all rows and the table's rows are
relatively wide as compared to the StartTime column, then this index
will probably be used.
If the query returns more than a few percent, or the rows are narrow,
then a nonclustered index on StartTime might be ignored. If this query
is one of the most important queries in your system, then you could
create a clustered index on StartTime (you will need to change the
current clustered index to nonclustered).
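A sketch of that change (the index name is illustrative; the existing clustered index on RowNumber would first have to be dropped or rebuilt as nonclustered):
CREATE CLUSTERED INDEX IX_Trace1_StartTime ON Trace1 (StartTime)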
HTH,
Gert-Jan
Pushkar wrote:
> <snip>

Optimizing SQL Query

Hello,
I have to run a script which will delete records from 10 tables
based on a certain condition. Instead of having to run the condition 10
times I thought running it once would have better performance and did
something like this
CREATE PROCEDURE testScript AS
set nocount on
declare @script_table table (row_id int identity(1,1), sid int)
declare @max int, @now int, @result varchar(100)
insert into @script_table
select sid from Alpha where lname like 'DOWN' and (fname like 'Tom' or
fname like 'Seinfeld')
select @max = max(row_id) from @script_table
select @now = 2, @result = convert(varchar(6), sid) from @script_table
where row_id = 1
while @max >= @now
Begin
select @result = @result + ',' + convert(varchar(6), sid) from
@script_table where row_id = @now
set @now = @now + 1;
End
select * from Beta where convert(varchar(5), sid) in ( @result )
if @@error <> 0
BEGIN
ROLLBACK Transaction
RAISERROR ( 'E', 2, 127)
END
...
...
...
but when I run this I don't get any values. Instead, when I run this
query, I get the output rows:
select * from Beta where convert(varchar(5), sid) in (select sid from
Alpha where lname like 'DOWN' and (fname like 'Tom' or fname like
'Seinfeld'))
Since @result has the output from Alpha with a comma delimiter, I was
under the impression that this should give me the result, but instead I
don't get any rows. Is this because @result is a varchar? Am I doing
something wrong here? Any help would be great.
Thanks
Khris
|||> Is this because @result is a varchar?
Yes. You can, however, parse a delimited string and have values returned in
a table, that you can use in your query.
My favourite solution is this one (by Dejan Sarka):
http://solidqualitylearning.com/blo.../10/22/200.aspx
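In this particular script, though, the sid values are already collected in @script_table, so a sketch that skips the delimited string entirely (assuming Beta.sid is comparable to the int sid column):
select * from Beta where sid in (select sid from @script_table)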
ML
http://milambda.blogspot.com/
|||Thanks ML, I will try that out
Khris

Optimizing SQL Filtering - Conceptual question

Here is a simple example of what I'm trying to figure out:
table1:
====
field1: PK, indexed
field2: FK, indexed
field3: numeric field, non indexed
field4: date field, non indexed
I want to know whether the first query is slower than the second:
select ... from table
where field3 = value // non indexed field first
and field1 = value // indexed field last
select ... from table
where field1 = value // indexed field first
and field3 = value // non indexed field last
Which is faster? Does it matter which field comes first? Or does the server
organize the fields automatically?
Is the order of fields in the 'where' part important to the query execution
speed?
Will the query execute faster if I put indexed fields first in the list
than if I put a non-indexed field first?
I'm using SQL Server 2000.
Thank you very much
Daniel E. Alvarez
IMS Soluciones Tecnolgicas S.A.
quilate@kropol.com.ar
Daniel,
The order of fields in the WHERE clause should not affect the order that the
optimizer uses.
Are you seeing that behavior? (That was unclear from your text below.) I
would suggest rerunning your tests several times and take timings from each
test cycle.
Russell Fields
(For a very complex set of joins the optimizer must eventually stop
optimizing and execute. In a case like that, the order of the FROM and
WHERE clause MAY have the side-effect of changing where the optimizer
decides to move on.)
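One way to verify this on SQL Server 2000 is to compare the plans for both orderings (a sketch using the placeholder values from the post):
SET SHOWPLAN_TEXT ON
GO
select ... from table1 where field3 = value and field1 = value
select ... from table1 where field1 = value and field3 = value
GO
SET SHOWPLAN_TEXT OFF
GO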
"Daniel Alvarez" <dalvarez@.flashmail.com> wrote in message
news:#NluJkTKEHA.1156@.TK2MSFTNGP09.phx.gbl...
> I expose a simple example of what I'm trying to figure it out:
> table1:
> ====
> field1: PK, indexed
> field2: FK, indexed
> field3: numeric field, non indexed
> field4: date field, non indexed
> I want to know if this SQL is slower than the second:
> select ... from table
> where field3 = value // non indexed field first
> and field1 = value // indexed field last
> select ... from table
> where field1 = value // indexed field first
> and field3 = value // non indexed field last
> Which is faster? Does it matter which field is before? Or the server
> organize the fields automatically?
> Is the order of fields in the 'where' part important to the query execution
> speed?
> If I put indexed fields first in the list, the query is executing faster
> than if I put an non indexed field first?
> I'm using SQL Server 2000.
> Thank you very much
> --
> Daniel E. Alvarez
> IMS Soluciones Tecnológicas S.A.
> quilate@.kropol.com.ar
>
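A quick way to verify this is to compare the plans directly; a minimal sketch against the example table above (the literal values are placeholders):

SET SHOWPLAN_TEXT ON
GO
SELECT * FROM table1 WHERE field3 = 10 AND field1 = 1
GO
SELECT * FROM table1 WHERE field1 = 1 AND field3 = 10
GO
SET SHOWPLAN_TEXT OFF
GO

Both SELECTs should show the same plan, typically a seek on the field1 index with the field3 condition applied as a residual predicate, confirming that the optimizer, not the textual order of the predicates, picks the access path.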

Optimizing SQL 2000 SP4 for large amount of memory

Hi
We will within the next week move our SQL 2000 SP4 server to a new high
performance server (2 x GHz Xeon, 4 GB RAM).
The database which is to be running on the server has a physical size of 50
MB on disk.
Is there a way to optimize SQL 2000 to keep as much as possible
in memory so searching will be as quick as possible? The read/write ratio is
approx 1000/1 (we read 1000 x as often as we write).
Are there some memory settings we can tweak, or are we better off letting SQL
Server 2000 handle it?
Thanks in regards
Anders Jacobsen
Just checking: 50 MB, not 50 GB? If so then there's not much you can do; with
4 GB of RAM it will all reside in memory.
--
Simon Sabin
SQL Server MVP
http://sqljunkies.com/weblog/simons
"Anders" <anderskj1@.yahoo.dk> wrote in message
news:%23j7EngYXGHA.4432@.TK2MSFTNGP04.phx.gbl...
> Hi
> We will within the next week move our SQL 2000 server SP4 to a new high
> performance server (2 x ghz XEON, 4 GB RAM)
> The database which is to be running on the server has a physical size of
> 50 mb on disk.
> Is there a way to optimize the SQL 2000 to keep as much as possible
> in-memory so searching will as quick as possible? The read / write rate is
> approx 1000/1 (we read 1000 x as often as writing)
> Is the some memory settings we can tweak or are we better of letting sql
> server 2000 handle it?
> Thanks in regards
> Anders Jacobsen
>|||"Anders" <anderskj1@.yahoo.dk> wrote in message
news:%23j7EngYXGHA.4432@.TK2MSFTNGP04.phx.gbl...
> Hi
> We will within the next week move our SQL 2000 server SP4 to a new high
> performance server (2 x ghz XEON, 4 GB RAM)
> The database which is to be running on the server has a physical size of 50
> mb on disk.
> Is there a way to optimize the SQL 2000 to keep as much as possible
> in-memory so searching will as quick as possible? The read / write rate is
> approx 1000/1 (we read 1000 x as often as writing)
> Is the some memory settings we can tweak or are we better of letting sql
> server 2000 handle it?
Generally you're better off letting SQL Server handle it. It will grab as
much RAM as it can (2GB with Standard, more with Enterprise with the proper
switches) and use it to cache.
Check perfmon and look at the cache hit ratio for one thing to see how
you're doing.
If it's truly 50MB, that'll fit into RAM absolutely w/o problems.
> Thanks in regards
> Anders Jacobsen
>|||> Generally you're better off letting SQL Server handle it. It will grab as
> much RAM as it can (2GB with Standard, more with Enterprise with the
> proper
> switches) and use it to cache.
> Check perfmon and look at the cache hit ratio for one thing to see how
> you're doing.
> If it's truly 50MB, that'll fit into RAM absolutely w/o problems.
Sounds great.
Thanks
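As a side note to the perfmon suggestion above, in SQL 2000 the same counter can also be read with a query; a minimal sketch (the ratio is the counter value divided by its base counter):

SELECT (a.cntr_value * 100.0) / b.cntr_value AS [Buffer cache hit ratio %]
FROM master.dbo.sysperfinfo a
JOIN master.dbo.sysperfinfo b
ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
AND b.counter_name = 'Buffer cache hit ratio base'

A value that stays near 100 would confirm that a 50 MB database is being served entirely from memory.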

Optimizing SQL - Union

Hello all,

I have a table with thousands of rows, and it is in this format:

id col1 col2 col3 col4
-- -- -- -- --
1 nm 78 xyz pir
2 bn 45 abc dir

I now want to get the data from this table in this format:

field val
--------
col1 nm
col1 bn
col2 78
col2 45
col3 xyz
col3 abc
col4 pir
col4 dir

In order to do this I am doing a union:

select * into #tempUpdate from
(
select 'col1' as field, col1 as val from table1
union
select 'col2' as field, col2 as val from table1
union
select 'col3' as field, col3 as val from table1
) as t

the above example query is smaller - I have a much bigger table with
about 80 columns (Imagine the size of my union query :) and this takes
a lot of time to execute. Can someone please suggest a better way to do
this?

The results of this union query are selected into a temp table, which I
then use to update another table. I am using SQL Server 2000.

my main concern is performance. any ideas please?

thanks
If you have SQL 2005, you can use UNPIVOT. If you are using earlier
releases, try:

select
m.id
, x.col
, case x.col
when 1 then m.col1
when 2 then m.col2
when 3 then m.col3
when 4 then m.col4
end as val
from
MyTable m
cross join
(
select 'col1' union all
select 'col1' union all
select 'col1' union all
select 'col4'
) as x (col)

--
Tom

----------------
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
..
"das" <Adityanad@.gmail.com> wrote in message
news:1143651738.146958.160520@.t31g2000cwb.googlegroups.com...
Hello all,

I have a table with thousands of rows and is in this format:

id col1 col2 col3 col4
-- -- -- -- --
1 nm 78 xyz pir
2 bn 45 abc dir

I now want to get the data from this table in this format:

field val
--------
col1 nm
col1 bn
col2 78
col2 45
col3 xyz
col3 abc
col4 pir
col4 dir

In order to do this I am doing a union:

select * into #tempUpdate
(
select 'col1' as field, col1 as val from table1
union
select 'col2' as field, col2 as val from table1
union
select 'col3' as field, col3 as val from table1
)

the above example query is smaller - I have a much bigger table with
about 80 columns (Imagine the size of my union query :) and this takes
a lot of time to execute. Can someone please suggest a better way to do
this?

The results of this union query are selected into a temp table, which I
then use to update another table. I am using SQL Server 2000.

my main concern is performance. any ideas please?

thanks|||Ok, I will try this. I am a little bit confused about the cross join..

I will test and let you know. Thanks!|||On 29 Mar 2006 09:02:18 -0800, das wrote:

(snip)
>In order to do this I am doing a union:
>select * into #tempUpdate
>(
> select 'col1' as field, col1 as val from table1
> union
> select 'col2' as field, col2 as val from table1
> union
> select 'col3' as field, col3 as val from table1
>)
>the above example query is smaller - I have a much bigger table with
>about 80 columns (Imagine the size of my union query :) and this takes
>a lot of time to execute. Can someone please suggest a better way to do
>this?

Hi das,

Somewhat simpler than the suggestions Tom posted (and probably less
efficient, but still a major win over your present version) is the
following simple change:

select 'col1' as field, col1 as val from table1
union ALL
select 'col2' as field, col2 as val from table1
union ALL
select 'col3' as field, col3 as val from table1

UNION without ALL will attempt to remove duplicates; with large result
sets, checking for duplicates can be a major performance killer. With
UNION ALL, you say "don't attempt to remove duplicates" - either because
you want them or (in this case) because you're sure there aren't any.

--
Hugo Kornelis, SQL Server MVP|||that's really good advice, I didn't know what 'union all' meant all
this time.
I tried Thomas's approach and it is much faster than before.
thanks a lot guys.|||Been away for a while. Here's a correction:

select
m.id
, x.col
, case x.col
when 'col1' then m.col1
when 'col2' then m.col2
when 'col3' then m.col3
when 'col4' then m.col4
end as val
from
MyTable m
cross join
(
select 'col1' union all
select 'col2' union all
select 'col3' union all
select 'col4'
) as x (col)

--
Tom

----------------
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com

"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:8izWf.1879$m35.157124@.news20.bellglobal.com...
If you have SQL 2005, you can use UNPIVOT. If you are using earlier
releases, try:

select
m.id
, x.col
, case x.col
when 1 then m.col1
when 2 then m.col2
when 3 then m.col3
when 4 then m.col4
end as val
from
MyTable m
cross join
(
select 'col1' union all
select 'col1' union all
select 'col1' union all
select 'col4'
) as x (col)

--
Tom

----------------
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
..
"das" <Adityanad@.gmail.com> wrote in message
news:1143651738.146958.160520@.t31g2000cwb.googlegroups.com...
Hello all,

I have a table with thousands of rows and is in this format:

id col1 col2 col3 col4
-- -- -- -- --
1 nm 78 xyz pir
2 bn 45 abc dir

I now want to get the data from this table in this format:

field val
--------
col1 nm
col1 bn
col2 78
col2 45
col3 xyz
col3 abc
col4 pir
col4 dir

In order to do this I am doing a union:

select * into #tempUpdate
(
select 'col1' as field, col1 as val from table1
union
select 'col2' as field, col2 as val from table1
union
select 'col3' as field, col3 as val from table1
)

the above example query is smaller - I have a much bigger table with
about 80 columns (Imagine the size of my union query :) and this takes
a lot of time to execute. Can someone please suggest a better way to do
this?

The results of this union query are selected into a temp table, which I
then use to update another table. I am using SQL Server 2000.

my main concern is performance. any ideas please?

thanks|||THANKS THOMAS!
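For completeness, a minimal sketch of the SQL 2005 UNPIVOT form Tom mentioned, using his MyTable example. UNPIVOT requires all of the unpivoted columns to share one type, so the numeric col2 is cast first (varchar(10) is an arbitrary choice):

SELECT id, field, val
FROM (SELECT id,
CAST(col1 AS varchar(10)) AS col1,
CAST(col2 AS varchar(10)) AS col2,
CAST(col3 AS varchar(10)) AS col3,
CAST(col4 AS varchar(10)) AS col4
FROM MyTable) AS p
UNPIVOT (val FOR field IN (col1, col2, col3, col4)) AS u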

Optimizing SELECT query

Hi

I have one table (tableDemo) with following structure:

TSID bigint (Primary Key)
TskID bigint
Sequence bigint
Version bigint
Frequency varchar(500)
WOID bigint
DateSchedule datetime
TimeStandard real
MeterEstimated real
MeterLast real
Description ntext
CreatedBy bigint
CreatedDate datetime
ModifiedBy bigint
ModifiedDate datetime
Id uniqueidentifier

I have 8000 records in this table. When I fire a simple select command (i.e. select * from tableDemo) it takes more than 120 seconds.

Are there any techniques by which I can access all these records within 2-3 seconds?

Regards,

ND

Why do you need every field of every record? Surely you are not going to display them all on one page? If you want to use them for paging, and are using Sql Server 2005, have a look at ROW_NUMBER:

http://www.davidhayden.com/blog/dave/archive/2005/12/30/2652.aspx
http://msdn2.microsoft.com/en-us/library/ms186734.aspx
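For example, a minimal ROW_NUMBER paging sketch against the table from the question (ordering by TSID and a page size of 50 are assumptions):

SELECT *
FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY TSID) AS rn
FROM tableDemo) AS t
WHERE rn BETWEEN 101 AND 150 -- page 3 at 50 rows per page

This returns one page per round trip instead of shipping all 8000 rows, including their ntext contents, to the client at once.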

|||

Are there any large values stored in any of the rows being returned, i.e. files or large amounts of text? Returning only 8000 rows shouldn't take very long, especially not 120 seconds. Is the instance of SQL on your local machine or located somewhere else?

Tim

optimizing resource allocation

Hi All,
Some of the queries we are running are resource intensive which is impacting
other users running other processes on the same SQL server. Is there a way
to limit the time allocated to a query so it pauses after x seconds in order
to allow other processes to run? Please note that the query should not be
permanently interrupted but temporarily halted.
I have checked the "query governor" option but it seems that this parameter
would disallow running the query if it exceeds the cost limit.
Regards
Hi
Perhaps it's time to start optimizing your queries. Compare the execution
plans and make sure that the optimizer is able to use the indexes defined on
the tables.
"imad koussa" <imad.koussa@.aims-co.com> wrote in message
news:uu6kex0FGHA.2444@.TK2MSFTNGP11.phx.gbl...
> Hi All,
> Some of the queries we are running are resource intensive which is
> impacting other users running other processes on the same SQL server. Is
> there a way to limit the time allocated to a query so it pauses after x
> seconds in order to allow other processes to run? Please note that the
> query should not be permanently interrupted but temporarily halted.
> I have checked the "query governor" option but it seems that this
> parameter would disallow running the query if it exceeds the cost limit.
> Regards
>
|||Basically, no, there is no easy or practical way to do that. But if the
resource under pressure is CPU, you should consider using a MAXDOP hint in those queries
to limit the number of CPUs they can use in parallel.
Andrew J. Kelly SQL MVP
"imad koussa" <imad.koussa@.aims-co.com> wrote in message
news:uu6kex0FGHA.2444@.TK2MSFTNGP11.phx.gbl...
> Hi All,
> Some of the queries we are running are resource intensive which is
> impacting other users running other processes on the same SQL server. Is
> there a way to limit the time allocated to a query so it pauses after x
> seconds in order to allow other processes to run? Please note that the
> query should not be permanently interrupted but temporarily halted.
> I have checked the "query governor" option but it seems that this
> parameter would disallow running the query if it exceeds the cost limit.
> Regards
>
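To illustrate the MAXDOP hint Andrew mentions, a minimal sketch (the table, the predicate and the degree of 2 are placeholders, not recommendations):

SELECT col1, col2
FROM BigTable
WHERE some_column = 42
OPTION (MAXDOP 2) -- cap this statement at two CPUs

The hint limits parallelism for that one statement only; the server-wide 'max degree of parallelism' setting is left untouched.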

Optimizing Reports

I have combined three reports into one big report. I would like to somehow cache the big report, and then create little reports from the cached report. What would be the best way to go about doing this?

RS currently can't do that - the data results from one query are not available as data sources to another query.

What you can do is:

1) use an execution snapshot for the report (this way the report only runs when you want it to)

2) use a non-query report parameter and some visibility (show/hide) logic to show the parts of the report you want to see at a given time.

Not a perfect solution, but it will likely solve your scenario.
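For instance, each section's Hidden property could be driven by an expression along these lines, where ReportPart is a hypothetical non-query parameter:

=IIF(Parameters!ReportPart.Value <> "Summary", True, False)

so picking a different parameter value exposes a different slice of the cached report.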

-Lukasz

|||I had a parameter in my report. Do I need to remove it, and add the non-query report parameter?|||

The way snapshots work is that once you create a snapshot, you cannot vary any parameters that are used in a query. Other report parameters that are not used in a query can be varied. Not sure which case you fall into.

-Lukasz


Optimizing Reports

I've got a report that's somewhat time consuming that runs on my
reporting server, and what I've found is that if I select anything more
than a few months in my date range parameters, it's like 5 minutes to run.
What I'd like to know...
Can I snapshot a year's worth of default data, and have the report run
off the snapshot, but allow you to specify a date range within the two dates?
Thanks in advance
Weston
Weston Weems wrote:
> I've got a report thats somewhat time consuming that runs on my
> reporting server, and what I've found is that if I select anything more
> than a few months in my daterange parameters, its like 5 minutes to run.
> What I'd like to know...
> Can I snapshot a years worth of default data, and have the report run
> off the snapshot, but allow you to specify date range within the two dates?
> Thanks in advance
> Weston
I think a linked report with a default parameter of a year's worth of
data set up in a snapshot would be good. Then just reference the linked
report and change the parameters. The report should be served from the
snapshot.
Just theory, I haven't tried it to see if it would work.

Optimizing query with UDF and table vars and IN

Hi, I am trying to optimize this scenario.
I have a query that is returning a list of services. Each service is done
by an employee and for a client. Employees have rights to see only certain
services. They can see any service that is done by an employee they have
rights to OR done for a client they have rights to.
CREATE TABLE [Service] (
service_id Int IDENTITY(1,1) NOT NULL,
emp_id Int,
client_id Int)
I have 2 UDFs that return a list of emp_id's they have rights to and a list
of client_id's they have rights to respectively.
CREATE FUNCTION dbo.f_list_emps (@my_emp_id Int)
RETURNS @EmpList TABLE (emp_id int not null unique)
AS
BEGIN
-- Fill @EmpList here with a bunch of queries
RETURN
END
CREATE FUNCTION dbo.f_list_clients (@my_emp_id Int)
RETURNS @ClientList TABLE (client_id int not null unique)
AS
BEGIN
-- Fill @ClientList here with a bunch of queries
RETURN
END
The actual query is built dynamically because it can have about 15 different
parameters passed to it, but a simplified version would look like:
SELECT * FROM Service
WHERE emp_id IN
(SELECT emp_id FROM dbo.f_list_emps(@my_emp_id))
OR client_id IN
(SELECT client_id FROM dbo.f_list_clients(@my_emp_id))
I'm looking for a way to optimize this a bit better. I can't join the table
vars directly because of the 'OR', and I don't want to do a UNION of 2
queries each with a separate join because of all the other parameters
involved in the query.
Thanks for any advice,
Dave
David D Webb (spivey@.nospam.post.com) writes:
> The actual query is built dynamically because it can have about 15
> different parameters passed to it, but a simplified version would look
> like:
> SELECT * FROM Service
> WHERE emp_id IN
> (SELECT emp_id FROM dbo.f_list_emps(@.my_emp_id))
> OR client_id IN
> (SELECT client_id FROM dbo.f_list_clients(@.my_emp_id))
> I'm looking for a way to optimize this a bit better. I can't join the
> table vars directly because of the 'OR', and I don't want to do a UNION
> of 2 queries each with a separate join because of all the other
> parameters involved in the query.
It's of course impossible to suggest optimizations when I don't see
the tables and do not see the full query.
What I would consider is to insert the data from the table functions
into temp tables. Temp tables have statistics, and since you are running
a dynamic query anyway, you could just as well make use of those statistics.
If you use the UDFs in the query, SQL Server will make some standard
assumptions about what they return.
I would also consider running a UNION of two queries. If you are building
the query dynamically, it should not be much of an issue to repeat the
queries. But you should benchmark whether UNION actually gives an
improvement.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||A simplified query probably won't do.
But here is a tip that might be useful: drop the UDFs. If you can rewrite
them as views, then the optimizer can properly optimize the query. When
using UDFs in this fashion you are bound to run into performance
problems as the resultset grows.
Gert-Jan
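To make the view idea concrete, a hedged sketch: it assumes the rights logic inside the UDFs can be expressed for all employees at once, keyed by a grantee column, and the tables EmpRights and ClientRights are made up for illustration:

-- Assumed base tables: EmpRights(grantee_id, emp_id), ClientRights(grantee_id, client_id)
CREATE VIEW dbo.v_emp_rights (grantee_id, emp_id)
AS
SELECT grantee_id, emp_id FROM dbo.EmpRights
GO
CREATE VIEW dbo.v_client_rights (grantee_id, client_id)
AS
SELECT grantee_id, client_id FROM dbo.ClientRights
GO
-- The optimizer expands views inline, unlike multi-statement UDFs:
SELECT s.*
FROM Service s
WHERE s.emp_id IN
(SELECT emp_id FROM dbo.v_emp_rights WHERE grantee_id = @my_emp_id)
OR s.client_id IN
(SELECT client_id FROM dbo.v_client_rights WHERE grantee_id = @my_emp_id)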
David D Webb wrote:
> Hi, I am trying to optimize this scenario.
> I have a query that is returning a list of services. Each service is done
> by an employee and for a client. Employees have rights to see only certain
> sevices. They can see any service that is done by an employee they have
> rights to OR done for a client they have rights to.
> CREATE TABLE [Service] (
> service_id Int IDENTITY(1,1) NOT NULL,
> emp_id Int,
> client_id Int)
> I have 2 UDFs that return a list of emp_id's they have rights to and a list
> of client_id's they have rights to respectively.
> CREATE FUNCTION dbo.f_list_emps (@.my_emp_id Int)
> RETURNS @.EmpList TABLE (emp_id int not null unique)
> AS
> BEGIN
> // Fill @.EmpList here with a bunch of queries
> END
> CREATE FUNCTION dbo.f_list_clients (@.my_emp_id Int)
> RETURNS @.ClientList TABLE (client_id int not null unique)
> AS
> BEGIN
> // Fill @.ClientList here with a bunch of queries
> END
> The actual query is built dynamically because it can have about 15 different
> parameters passed to it, but a simplified version would look like:
> SELECT * FROM Service
> WHERE emp_id IN
> (SELECT emp_id FROM dbo.f_list_emps(@.my_emp_id))
> OR client_id IN
> (SELECT client_id FROM dbo.f_list_clients(@.my_emp_id))
> I'm looking for a way to optimize this a bit better. I can't join the table
> vars directly because of the 'OR', and I don't want to do a UNION of 2
> queries each with a separate join because of all the other parameters
> involved in the query.
> Thanks for any advice,
> Dave
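And a minimal sketch of the temp-table variant Erland describes, shown here as the UNION of two joins (in the real dynamic SQL the other filter parameters would be appended to both halves):

-- Materialize the UDF results so the optimizer gets real statistics.
SELECT emp_id INTO #emps FROM dbo.f_list_emps(@my_emp_id)
SELECT client_id INTO #clients FROM dbo.f_list_clients(@my_emp_id)

SELECT s.* FROM Service s JOIN #emps e ON e.emp_id = s.emp_id
UNION
SELECT s.* FROM Service s JOIN #clients c ON c.client_id = s.client_id

UNION rather than UNION ALL is deliberate, since a service can match on both the employee and the client and would otherwise appear twice.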