Friday, March 30, 2012
how to truncate log
I use SQL 2000 and the log of a database is very big (I have full recovery).
I tried backing it up with Veritas BackupExec in order to truncate it, but it did
not work.
What should I do?
Thanks a lot, Calin
Run DBCC OPENTRAN ('yourDB') to see whether you have an
active transaction that prevents the log from being truncated.
Linchi
>--Original Message--
>Hi there,
>I use SQL 2000 and the log of a database is very big (I have full recovery).
>I tried backing it up with Veritas BackupExec in order to truncate it, did
>not work.
>What should I do?
>Thanks a lot, Calin
>.
>|||There is a difference between truncating the log and shrinking the file
size. A normal log backup will truncate the log, freeing internal space,
but the file size will remain the same. You'll need to use DBCC SHRINKFILE
to reduce the transaction log file size.
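A minimal T-SQL sketch of the two steps Jim describes, assuming a database named YourDB and a logical log file name YourDB_Log (both hypothetical; check sysfiles for the real logical name):
-- back up (and thereby truncate) the log
BACKUP LOG YourDB TO DISK = 'C:\Backup\YourDB_log.trn'
-- then shrink the physical file, here to roughly 100 MB
USE YourDB
DBCC SHRINKFILE ('YourDB_Log', 100)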
Jim
"Tester" <CalinGuga@.netscape.net> wrote in message
news:%23E8xJUkzDHA.2412@.TK2MSFTNGP10.phx.gbl...
> Hi there,
> I use SQL 2000 and the log of a database is very big (I have full recovery).
> I tried backing it up with Veritas BackupExec in order to truncate it, did
> not work.
> What should I do?
> Thanks a lot, Calin
>|||I don't know about Veritas BackupExec but I do have some
knowledge of SQLServer backup. When you do a full DB
backup using SQLServer backup, the transactions which are
in the log file and which have been marked as committed
are deleted from the log file. Then you can run DBCC
SHRINKFILE and it will work.
If you have Auto Shrink checked on the database
properties, then the SQLServer will shrink the size of
the log file over time after a full DB backup. This is
the advantage of using a maintenance plan backup in
SQLServer.
If you are in a super hurry to shrink the log file, do
the following:
1. Make sure you are not publishing using Transactional
replication.
2. Make sure there are no open transactions by running
DBCC OPENTRAN.
3. Make sure there are no remote server links to this
database.
4. Make sure you know what maintenance plans are using
this database because you will probably have to rebuild
them.
5. Detach the database.
6. Manually delete the log file in Windows Explorer.
7. Attach the database which will build a new, 504 KB,
log file.
8. Rebuild your maintenance plans.
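A hedged T-SQL sketch of steps 5 through 7 above (the database name and file paths are placeholders; this discards the existing log, so only consider it after the warnings above have been checked):
EXEC sp_detach_db 'YourDB'
-- delete the .ldf file in Windows Explorer, e.g. C:\MSSQL\Data\YourDB_log.ldf
EXEC sp_attach_single_file_db 'YourDB', 'C:\MSSQL\Data\YourDB.mdf'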
>--Original Message--
>Hi there,
>I use SQL 2000 and the log of a database is very big (I have full recovery).
>I tried backing it up with Veritas BackupExec in order to truncate it, did
>not work.
>What should I do?
>Thanks a lot, Calin
>.
>
How to truncate DB logs in SQL 2000.
Can someone tell me how to truncate the DB logs in SQL 2000. I have a server with 35 GB of transaction log.
Thanks.
Check out the KB articles below:
INF: How to Shrink the SQL Server 7.0 Transaction Log
http://support.microsoft.com/default.aspx?scid=kb;en-us;256650
INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
http://support.microsoft.com/default.aspx?scid=kb;en-us;272318
Log File Grows too big
http://www.support.microsoft.com/?id=317375
Log file filling up
http://www.support.microsoft.com/?id=110139
Considerations for Autogrow and AutoShrink
http://www.support.microsoft.com/?id=315512
http://www.mssqlserver.com/faq/logs-shrinklog.asp
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"HUBCO" <anonymous@.discussions.microsoft.com> wrote in message
news:07E35360-B62D-43FC-8619-97CBCF4546EC@.microsoft.com...
> Can someone tell me how to truncate the DB logs in SQL 2000. I have a server with 35 GB of transaction log.
> Thanks.|||http://www.aspfaq.com/2471
--
Aaron Bertrand
SQL Server MVP
http://www.aspfaq.com/
"HUBCO" <anonymous@.discussions.microsoft.com> wrote in message
news:07E35360-B62D-43FC-8619-97CBCF4546EC@.microsoft.com...
> Can someone tell me how to truncate the DB logs in SQL 2000. I have a server with 35 GB of transaction log.
> Thanks.|||The simplest and easiest way is to go into your database's
properties and set the recovery to 'simple', then apply
the changes. This will allow SQL Server to automatically
truncate the tran log at the next checkpoint.
Once that is complete (you will need to monitor the db to
find out when that occurs, but it should happen fast) you
can go back in and change the recovery mode back to full
or whatever you had it set at before you started.
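A hedged T-SQL sketch of that approach (the database and log file names are placeholders; note that switching to simple breaks the log backup chain, so take a full backup once you switch back to full):
ALTER DATABASE YourDB SET RECOVERY SIMPLE
CHECKPOINT
USE YourDB
DBCC SHRINKFILE ('YourDB_Log', 100)   -- optionally shrink the physical file as well
ALTER DATABASE YourDB SET RECOVERY FULL
BACKUP DATABASE YourDB TO DISK = 'C:\Backup\YourDB.bak'   -- restart the backup chain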
>--Original Message--
>Can someone tell me how to truncate the DB logs in SQL 2000. I have a server with 35 GB of transaction log.
>Thanks.
>.
>
How to truncate complete database?
Hello,
I want to truncate all tables present in a particular database. Is there any simple way to do it, or do I have to do it on an individual basis (table by table)?
regards
When you say truncate, do you just want to remove all records from all tables? The easiest way would be to script out the tables with drop and create statements. To actually truncate the tables, you could write a cursor to loop through the table names and to run dynamic sql to truncate each table.
I hope this helps.
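A minimal sketch of the cursor approach described above (it assumes no foreign keys reference the tables, since TRUNCATE TABLE fails on tables referenced by a foreign key):
DECLARE @name sysname
DECLARE tbl CURSOR FOR
    SELECT name FROM sysobjects WHERE type = 'U'   -- user tables in the SQL 2000 catalog
OPEN tbl
FETCH NEXT FROM tbl INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('TRUNCATE TABLE [' + @name + ']')        -- dynamic SQL per table
    FETCH NEXT FROM tbl INTO @name
END
CLOSE tbl
DEALLOCATE tbl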
|||Zadoras wrote: Hello,
I want to truncate all tables present in the particular database, Is there any simple way to do it? or do I have to do it on individula basis (table by table)?
regards
create a script to drop all your FK constraints
create a script to create all your FK constraints
To do this, right click on the database and "Generate Scripts.."
Then
1 - run your script to drop all Fks
2 - use the sp_MSForEachTable stored procedure to truncate all tables
EXEC sp_MSForEachTable 'Truncate Table ?'
3 - run your script to add all the FKs back|||thanx very much for your support
How to truncate a transaction log using Simple Recovery Model?
I have a SQL Server database on a server with limited disk space and I do not
need the transaction log. Is there a TSQL command to truncate the log? I am
using a Simple Recovery Model, but it is still keeping a transaction log.
Do I have to force a check point programatically?
Any suggestions would be appreciated.
Thanks in advance,
WayneM
Hi
Simple Recovery Mode forces the log to be truncated at every checkpoint.
Checkpoint runs by default every minute. Only the inactive transactions can
be truncated.
But, If you have a transaction that affects 100'000 records, until the
transaction is complete (committed or rolled back), those records can not be
truncated. The log has to store them all. So, keep your transactions short.
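A quick way to check how much of the log is actually in use, and whether an open transaction is pinning it (a sketch; both commands are available in SQL 2000, and 'YourDB' is a placeholder):
DBCC SQLPERF(LOGSPACE)      -- log size and percent used for every database
DBCC OPENTRAN('YourDB')     -- oldest active transaction in that database, if any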
How big is your DB, your transaction log?
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"WayneM" <WayneM@.discussions.microsoft.com> wrote in message
news:31259267-F903-4773-8842-5BF2EFA896A4@.microsoft.com...
> I have a SQL Server database on a server with limited disk space and I do not
> need the transaction log. Is there a TSQL command to truncate the log? I am
> using a Simple Recovery Model, but it is still keeping a transaction log.
> Do I have to force a check point programatically?
> Any suggestions would be appreciated.
> Thanks in advance,
> WayneM|||Just because you don't need the tran log doesn't mean SQL Server doesn't<g>.
The tran log is crucial to proper operation. I am not sure what you are
asking for here but I think you are misusing the term truncation. I suspect
what you really want is to shrink the log. A truncation does not do that, and
in Simple mode it will truncate all on its own when it gets about 70% full.
If you want to shrink the size of the log then use DBCC SHRINKFILE. Just
make sure it is large enough to hold the largest transaction you will do or
it will just grow again.
Andrew J. Kelly SQL MVP
"WayneM" <WayneM@.discussions.microsoft.com> wrote in message
news:31259267-F903-4773-8842-5BF2EFA896A4@.microsoft.com...
>I have a SQL Server database on a server with limited disk space and I do
>not
> need the transaction log. Is there a TSQL command to truncate the log? I
> am
> using a Simple Recovery Model, but it is still keeping a transaction log.
> Do I have to force a check point programatically?
> Any suggestions would be appreciated.
> Thanks in advance,
> WayneM|||Mike,
My db is 1.4GB and the log is 2.0 GB. This in itself is not big, but there
are several databases in the server. Each DB has files that are parsed and
loaded into the database, so the concept of transactions does not apply here.
We just keep the files and re-parse and load them if necessary.
Thanks,
Wayne
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> Simple Recovery Mode forces the log to be truncated at every checkpoint.
> Checkpoint runs by default every minute. Only the inactive transactions can
> be truncated.
> But, If you have a transaction that affects 100'000 records, until the
> transaction is complete (committed or rolled back), those records can not be
> truncated. The log has to store them all. So, keep your transactions short.
> How big is your DB, your transaction log?
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> IM: mike@.epprecht.net
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
> "WayneM" <WayneM@.discussions.microsoft.com> wrote in message
> news:31259267-F903-4773-8842-5BF2EFA896A4@.microsoft.com...
>|||> so the concept of transactions does not apply here.
You don't seem to understand how Sql Server works as compared to some smaller
database engines. Everything is always in a transaction in Sql Server. It
may be explicit, in which case you manually issue a Begin Tran and a Commit Tran
to wrap two or more statements to keep them Atomic. But even if you don't
each statement is Atomic by itself and is wrapped in a transaction behind
the scenes. If you bulk load 100K rows they will all be in a single
transaction whether you want them to be or not. This activity is always
logged in one form or another. If you do a minimally logged load the
extents that are affected are logged. Otherwise it is fully logged. You
need to meet the requirements listed in BooksOnLine under "minimally logged
bulk copy" to do this though. If you really do just load the files and the
db is replaceable then you should ensure your loads meet these conditions.
That will speed up the load and allow you to keep the log file smaller.
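A hedged sketch of a load that can qualify for minimal logging under the conditions Andrew mentions (bulk-logged or simple recovery, TABLOCK, and no indexes on the target table; all names and the file path are placeholders):
BULK INSERT YourDB.dbo.StagingTable
FROM 'C:\Loads\data.txt'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')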
Andrew J. Kelly SQL MVP
"WayneM" <WayneM@.discussions.microsoft.com> wrote in message
news:AAF9E36F-4784-43B1-B662-E9332884BB41@.microsoft.com...
> Mike,
> My db is 1.4GB and the log is 2.0 GB. This in itself is not big, but
> there
> are several databases in the server. Each DB has files that are parsed
> and
> loaded into the database, so the concept of transactions does not apply
> here.
> We just keep the files and re-parse and load them if necessary.
> Thanks,
> Wayne
> "Mike Epprecht (SQL MVP)" wrote:
>|||If you just want to get rid of the log, then (after you have backed up the
DB) 'Detach Database', go to the .LDF file via Explorer (usually in the
Program Files\MSSQLServer\Data path) and delete it.
Then go back to Enterprise Manager, right click on Databases and attach
the DB; it will tell you it can't find the .LDF and will create one of 1MB.
I am assuming that as you are using Simple Recovery, it either is not a
Production DB, or it only contains static data.
--
Cheers
Coburndavis
"WayneM" wrote:
> I have a SQL Server database on a server with limited disk space and I do not
> need the transaction log. Is there a TSQL command to truncate the log? I am
> using a Simple Recovery Model, but it is still keeping a transaction log.
> Do I have to force a check point programatically?
> Any suggestions would be appreciated.
> Thanks in advance,
> WayneM|||Try backing up the DB, then use Enterprise Manager (EM) to 'Detach Database',
go to the .LDF (ie the log file) via Explorer and delete it.
Go back to EM, right click on Databases and 'Attach DB' (point it to the .MDF
for the DB); it will tell you there is no .LDF, accept this and it will create
one of 1MB.
I am assuming as your recovery mode is Simple it is not a production DB, or
that the data is static
--
Cheers
Coburndavis
"WayneM" wrote:
> I have a SQL Server database on a server with limited disk space and I do not
> need the transaction log. Is there a TSQL command to truncate the log? I am
> using a Simple Recovery Model, but it is still keeping a transaction log.
> Do I have to force a check point programatically?
> Any suggestions would be appreciated.
> Thanks in advance,
> WayneM
How to truncate a transaction log manually?
please help me
thank you very much
Backup Log with truncate_only
"Alan" <alanchang@.jetmaster.com.tw> wrote in message
news:05a301c35af5$38f259c0$a601280a@.phx.gbl...
> please help me
> thank you very much|||Cheapest is this one:
http://www.msde.biz/msdebackup/msdebackup.htm
only 14$ !!!
Here you can find some severe problems with SQL Server, incompatible
modes and problems with transactions and backups:
http://www.itworld.com/nl/db_mgr/09032001/
"Lumigent Technologies has released a new version of Log Explorer, one
of the best-selling third-party products in the SQL Server space. Log
Explorer reads a SQL Server 2000 or 7.0 transaction log and decodes the
information into a readable format. Log Explorer lets you analyze
transaction histories, but most important, it lets you selectively
recover data. The newest release, Log Explorer 3.03, adds multiple
enhancements to this valuable capability."
regards, Guido Stepken
Alan wrote:
> please help me
> thank you very much|||http://support.microsoft.com/default.aspx?scid=kb;EN-US;272318
--
BR,
Mark Broadbent mcdba,mcse+i
_________________________
"Alan" <alanchang@.jetmaster.com.tw> wrote in message
news:05a301c35af5$38f259c0$a601280a@.phx.gbl...
> please help me
> thank you very much
How to truncate a str?
Hello All,
I am doing some data extraction from a database table: custtable. The customers have zip codes of 9 digits; however, I just need the first five of them. I am not sure of the syntax. Can anyone advise me on this one?
This is what my statement is
where ((len (ct.zipcode ) > 5 and ct.country = '001')or ct.country <> '001')
This one returns customers with zipcodes longer than 5 characters, and I need to truncate those, because my database also contains customers with zip codes of only 5 digits and I have extracted that data already. Now I need all the zipcodes that have a length of 9 to be truncated to 5.
An early response would be highly appreciated.
Thanks,
Rashi
Hello Rashi,
Try this:
select left(ct.zipcode, 5)
from custtable ct
where ((len(ct.zipcode) > 5 and ct.country = '001') or ct.country <> '001')
Hope this helps.
Jarret
|||useless post|||Even for T-SQL, it still is
SELECT LEFT(postal_code, 5)
FROM AddressTable
Shouldn't need to, but if you want to trim it - RTRIM(LTRIM(LEFT(postal_code,5)))
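If the goal is to trim the stored values in place rather than only in the extract, a hedged sketch (it assumes the custtable/zipcode/country names used above, and that a backup is taken first):
UPDATE custtable
SET zipcode = LEFT(zipcode, 5)
WHERE LEN(zipcode) > 5 AND country = '001'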
how to truncate a log file
hi friends,
I am truncating the log file with the command
backup log UserManager with truncate_only, but this is not truncating as much as expected.
Is there any way to truncate the log file effectively? My log file is now growing into the GBs.
please help
thanks
reddy
Hi,
There may still be an open transaction inside the database; that is the
reason the log is not getting truncated.
How to check if there are any open transactions:
DBCC OPENTRAN('db_name')
If there are active open transactions, you may need to wait till the
transactions end. Otherwise start the database in single user:
ALTER database <dbname> set single_user with rollback immediate
go
backup log <dbname> with truncate_only
go
Execute the below command to see the log size:
DBCC SQLPERF(LOGSPACE)
You can shrink the physical LDF file using the command below:
DBCC SHRINKFILE('logical_ldf_name', TRUNCATEONLY)
Execute DBCC SQLPERF(LOGSPACE) again to get the file size and usage.
Thanks
Hari
MCDBA
"reddy" <reddy@.discussions.microsoft.com> wrote in message
news:C7018680-A788-481B-AABF-8BD677461A90@.microsoft.com...
> hi friends,
> i am truncating the log file with the command
> backup log UserManager with truncate_only ,this is not truncating up to
mark.
> is there any way to truncate the logfile effectively.now my log file is
going in GBs.
> please help
> thanks
> reddy
>
>
|||hari,
thank you for your help
I have one doubt, Hari: if I execute the following command in QA, where will it back up the log? Is there any default path for it, and what should the file name be?
backup log <dbname> with truncate_only
If I use the DBCC SHRINKFILE('logical_ldf_name', truncateonly) command, my log file suddenly shrinks from 2GB to 1MB. Did I lose any details because of this, and if so, how can I recover them?
Please explain. Without testing on a dummy database I applied the commands on a production server. Is there any data loss with this?
thanks & regards
reddy
|||Hi Reddy,
backup log <dbname> with truncate_only: This command will not back up the
transaction log. It will remove the
inactive part of the transaction log without backing it up. So if you need to
back up the transaction log, use the command below:
backup log <dbname> to disk='c:\backup\dbname.trn'
This will backup the transaction log to c:\backup folder.
DBCC SHRINKFILE('logical_ldf_name', truncateonly) command: my log file suddenly
shrank from 2GB to 1MB. Did I lose any details because of this, and if so, how
can I recover?
No, this command will shrink the physical transaction log file. Say you
had a 2 GB transaction log file, but out of that only 1 MB was used. After
issuing the command, the empty part of the file is shrunk away.
Sure, absolutely no data loss will happen. The only thing is, instead of backup
log <dbname> with truncate_only, use
backup log <dbname> to disk='c:\backup\dbname.trn' to back up the transaction
log at frequent intervals.
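A hedged sketch of what that regular log backup could look like when run from a scheduled job (the path and database name are placeholders):
BACKUP LOG YourDB TO DISK = 'c:\backup\YourDB_log.trn' WITH NOINIT
-- NOINIT appends each run to the same backup file; write to a new file name per run if you prefer separate files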
Thanks
Hari
MCDBA
"reddy" <reddy@.discussions.microsoft.com> wrote in message
news:DFA68B92-4FF1-4570-8786-BA5EC1C08C6F@.microsoft.com...
> hari,
> thank you for ur help
> i have one doubt hari,if i execute the following command in QA where it
will backup the log,is there any default path for it.what should be the file
name.
> backup log <dbname> with truncate_only
> if i use the DBCCSHRINKFILE('logical_ldf_name',truncateonly) command my
log file suddenly shrinked from 2GB to 1MB.because of this any details i
lost.if i lost how can i recover.
> please explain.with out testing on dummy database i applied the commands
on production server.is there any data loss with this.
> thanks®ards
> reddy
>
>
|||hari,
thank you very much for ur help.
thanks
reddy
"Hari" wrote:
> Hi Reddy,
> backup log <dbname> with truncate_only :- This command will not backup the
> transaction log. This will remove the
> inactive part of the transaction log with out backup. So if you need to
> backup the trasnqaction log use the below command:-
> backup log <dbname> to disk='c:\backup\dbname.trn'
> This will backup the transaction log to c:\backup folder.
> DBCCSHRINKFILE('logical_ldf_name',truncateonly) command my log file suddenly
> shrinked from 2GB to 1MB.because of this any details i lost.if i lost how
> can i recover.?
> No , this command will shrink the physical trasnaction log file. Say you
> had 2 GB transaction log file, but out of that only 1 MB was used. AFter
> issuing the command the entry was shunk.
> Sure, Absolutely no data loss will happen. Only thing is insted of backup
> log <dbname> with truncate_only , use the
> backup log <dbname> to disk='c:\backup\dbname.trn' to backup the trasnaction
> log in frequent intervals.
> --
> Thanks
> Hari
> MCDBA
> "reddy" <reddy@.discussions.microsoft.com> wrote in message
> news:DFA68B92-4FF1-4570-8786-BA5EC1C08C6F@.microsoft.com...
> will backup the log,is there any default path for it.what should be the file
> name.
> log file suddenly shrinked from 2GB to 1MB.because of this any details i
> lost.if i lost how can i recover.
> on production server.is there any data loss with this.
>
>
How to truncate a log file
I've got a SQL V7 server whose log file has grown out of
control.
Could you please enlighten me to the steps to truncate
this log file?
Thanks
Bill
Backup log dbname with truncate_only
If you do not require point in time recovery you might turn on "truncate log
on checkpoint", if you do, schedule regular log backups. See "backup" in BOL
HTH
--
Ray Higdon MCSE, MCDBA, CCNA
--
"bill dunn" <anonymous@.discussions.microsoft.com> wrote in message
news:374C8E81-0C23-4F35-A2B2-2985243A864A@.microsoft.com...
> I've got a SQL V7 server whose log file has grown out of
> control.
> Could you please enlighten me to the steps to truncate
> this log file?
> Thanks
> Bill
>|||Thanks Ray.|||Hi,
In addition to Ray's post, do a DBCC SHRINKFILE on the transaction log
file.
This will shrink your transaction log file to the minimum size.
For more information on DBCC SHRINKFILE, refer BOL
Thanks
Hari
MCDBA
"bill dunn" <anonymous@.discussions.microsoft.com> wrote in message
news:68488509-6364-446B-B118-4B41FFCFDAFC@.microsoft.com...
> Thanks Ray.|||Hi
You may want to read
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_1uzr.asp
and check previous posts in these newsgroups.
John
"bill dunn" <anonymous@.discussions.microsoft.com> wrote in message
news:374C8E81-0C23-4F35-A2B2-2985243A864A@.microsoft.com...
> I've got a SQL V7 server whose log file has grown out of
> control.
> Could you please enlighten me to the steps to truncate
> this log file?
> Thanks
> Bill
>
How to Truncate
Hello again,
I also want to truncate some data which is a floating point number. But
it is calculated as a result of two other columns.
for example lets say
column1: 2,4
column2: 4,1483
the result is: 9,95592
but I want to truncate to the first 2 digits after the decimal point,
like 9,95.
Also, what is the rounding formula and how do I use it?
Thanks in advance.
You're really talking about display formatting, which is better done in the
client application.
If you must have SQL Server change the format of this computed value, try:
SELECT cast( (Column1 * Column2 ) AS decimal(10,2) )
Arnie Rowland, Ph.D.
Westwood Consulting, Inc
Most good judgment comes from experience.
Most experience comes from bad judgment.
- Anonymous
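For the rounding part of the question: T-SQL's ROUND function rounds by default and truncates when its third argument is non-zero, whereas the CAST above rounds. A small sketch:
SELECT ROUND(9.95592, 2)       -- 9.96000 (rounded)
SELECT ROUND(9.95592, 2, 1)    -- 9.95000 (truncated)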
"Dot Net Daddy" <cagriandac@.gmail.com> wrote in message
news:1154637900.501096.298380@.m73g2000cwd.googlegroups.com...
> Hello again,
> I also want to truncate some data which is a floating point number. But
> it is calculated as a result of two other columns.
> for example lets say
> column1: 2,4
> column2: 4,1483
> the result is: 9,95592
> but I want to truncate the first 2 digits after the decimal point.
> like 9,95
> also what is the rounding formula and how to use?
>
> Thanks in advance.
>
How to troubleshoot intermittent lockups
Hi,
I'm an accidental DBA and I was wondering how to troubleshoot what causes Sql Server to lock up some times. When I say “lock up”, I don’t mean completely. I’m talking about a situation everything’s working fine and all of a sudden we start getting complaints that users can’t get into or use SuchAndSuch application which uses Sql Server. So it's not to the point where I have to use the dedicated administratrive connection...but it is simply not responding to requests!
I check the replication monitor and all of our (push subscription) replication processes are backed up and all of them say “Synchronizing” and they’re not making any progress.
I check the activity monitor and if I trace the “blocked by” column I can usually see that there are several processes blocking other things. But it’s hard to determine the root cause, because once one thing blocks another thing, that causes a chain reaction (especially once the replication processes start kicking in).
For instance, this happened earlier today and there were several processes that said “blocked by” process 13. Yet there was no process 13 in the activity monitor (yes I had it ordered by ProcessID and I checked it twice).
I know how to start a profiler trace, but I’m never sure how to interpret the plethora of results to make any type of intelligent determination from them.
Usually what happens is I sit there browsing through the task manager, replication monitor and activity monitor until it *magically* starts responding on its own just as suddenly as it stopped! Of course, the users ask “how did you fix it”? –To which I have no good answer.
If your database is not responding what is the easiest way to determine the root cause? Trial and error takes too long with an intermittent problem and this solution never satisfies upper management!
Thanks,
Nate
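One way to see who sits at the head of a blocking chain like the one described above (a sketch using the sysprocesses system view, which exists in SQL 2000 and 2005):
SELECT spid, blocked, waittime, lastwaittype, cmd, loginame
FROM master.dbo.sysprocesses
WHERE blocked <> 0
-- the head blocker is a spid that shows up in the blocked column but is not itself blocked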
Hi Nate,
You should invest a few hours of your time in this webcasts (the return of investment is great IMHO):
"TechNet Support WebCast:Performance troubleshooting and analysis in Microsoft SQL Server 2000" http://support.microsoft.com/kb/838622/en-us
"TechNet Webcast: Troubleshooting Performance Problems in Microsoft SQL Server 2005" http://msevents.microsoft.com/cui/eventdetail.aspx?eventID=1032275646&Culture=en-US
"TechNet Webcast: SQL Server 2005 Troubleshooting: Supportability Features for SQL 2005"
http://msevents.microsoft.com/cui/eventdetail.aspx?eventID=1032275593&Culture=en-US
"TechNet Webcast: Performance Diagnosis in SQL Server 2005"
http://msevents.microsoft.com/cui/eventdetail.aspx?eventID=1032275586&Culture=en-US
For SQL Server 2005 you should take a look at this whitepaper: "Troubleshooting Performance Problems in SQL Server 2005" http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx
|||Thanks Cristian. I will definitely check these out!
How to troubleshoot a strange result from a stored procedure?
I am trying to tune a stored procedure that is long and running slow. As
part of testing, the results are odd from what I am expecting. The testing
is running on 2 databases which were restored from the same backup, with one
database set with the read-only option. This allows me to check the
original code against the modified code. These databases are on my server;
therefore, they are locked down.
The stored procedure has 15 pass-in variables with default values of NULL.
If I modify the stored procedure by adding a non-essential statement such as
SELECT 1 or SELECT getdate() anywhere in the stored procedure, the procedure
returns a different result.
Test query: exec GetValues @FName = 'Smith%', @FName = default, @Price =
default, @CustType = default
Part of the existing stored procedure:
if (@Price is not null or @Price <> '') and (@CustomerType is not null or
@CustType <> '')
select * from CustTable
where @LastName is null or customer.FName like rtrim(@LastName)
else
select 'No data'
When I run this on the read-only database, it returns some rows. But when I
run this on the test database with modified code, it returns no data. The
stored procedures from both databases are basically the same, with the exception of
adding a non-essential statement (SELECT 1 or SELECT getdate()). Based on
the test query, it should return no data on both databases, but the results
are different.
Is there a tool or method to figure out the strange results from this
stored procedure? I tested with different methods but came out empty-handed.
Please help! Thanks!Correction to part of the stored procedure:
if (@Price is not null or @Price <> '') and (@CustomerType is not null or
@CustType <> '')
select * from CustTable
where @LastName is null or LName like rtrim(@LastName)
else
select 'No data'
"KTN" wrote:
> I am trying to tune a stored procedure that is long and running slow. As
> part of testing, the results are odd from what I am expecting. The testing
> is running on 2 databases which restored from the same backup with one
> database set with read-only option. This will allows me to check the
> original code with modified code. These databases are on my server;
> therefore, it is lock down.
> The stored procedure has 15 pass-in variables with default values are NULL.
> If I modify the stored procedure by adding a non-essential statement such as
> SELECT 1 or SELECT getdate() anywhere in the stored procedure, the procedure
> return different result.
>
> Test query: exec GetValues @.FName = 'Smith%', @.FName = default, @.Price =
> default, @.CustType = default
> Part of the existing stored procedure:
> if (@.Price is not null or @.Price <> '') and (@.CustomerType is not null or
> @.CustType <> '')
> select * from CustTable
> where @.LastName is null or customer.FName like rtrim(@.LastName)
> else
> select 'No data'
>
> When I run this on the read-only database, it returns some rows. But when I
> run this on the test database with modified code, it returns no data. The
> stored procedures from both database are basically same with exception of
> adding a non-essential statement (SELECT 1 or SELECT getdate()). Based on
> what test query, it should return no data on both databases, but the results
> are different.
> Is there a tool or methods to figure out the strange results from this
> stored procedure? I tested with different methods but come out empty-handed.
> Please help! Thanks!|||KTN
Are you investigating how to optimize the SP, or why it returns the wrong result?
Have you looked at the execution plan? Was the optimizer able to use
indexes? How much data do you return?
"KTN" <KTN@.discussions.microsoft.com> wrote in message
news:47DD3664-5829-40E5-92B8-A5EE1E8D9C45@.microsoft.com...
>I am trying to tune a stored procedure that is long and running slow. As
> part of testing, the results are odd from what I am expecting. The
> testing
> is running on 2 databases which restored from the same backup with one
> database set with read-only option. This will allows me to check the
> original code with modified code. These databases are on my server;
> therefore, it is lock down.
> The stored procedure has 15 pass-in variables with default values are
> NULL.
> If I modify the stored procedure by adding a non-essential statement such
> as
> SELECT 1 or SELECT getdate() anywhere in the stored procedure, the
> procedure
> return different result.
>
> Test query: exec GetValues @.FName = 'Smith%', @.FName = default, @.Price =
> default, @.CustType = default
> Part of the existing stored procedure:
> if (@.Price is not null or @.Price <> '') and (@.CustomerType is not null or
> @.CustType <> '')
> select * from CustTable
> where @.LastName is null or customer.FName like rtrim(@.LastName)
> else
> select 'No data'
>
> When I run this on the read-only database, it returns some rows. But when
> I
> run this on the test database with modified code, it returns no data. The
> stored procedures from both database are basically same with exception of
> adding a non-essential statement (SELECT 1 or SELECT getdate()). Based on
> what test query, it should return no data on both databases, but the
> results
> are different.
> Is there a tool or methods to figure out the strange results from this
> stored procedure? I tested with different methods but come out
> empty-handed.
> Please help! Thanks!|||My plan is to optimize the SP, and as part of doing that I found wrong
results after adding non-essential statements such as SELECT getdate().
I looked at the execution plans; the plan for the new code is much better, using
some covering indexes and better query plans.
The data returned varies based on the 15 input parameters, but is mostly under
100 rows.
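One concrete way to compare what the two databases are actually doing is to capture the text plan for the same call on each database and diff the output. A rough sketch, using the procedure name from the post and a hypothetical database name:

-- Capture the estimated plan as text; the statement is not executed while
-- SHOWPLAN_TEXT is ON, so this is safe to run against the read-only copy too.
USE ReadOnlyCopy   -- hypothetical database name; run once per database
GO
SET SHOWPLAN_TEXT ON
GO
exec GetValues @FName = 'Smith%'   -- same arguments as the test query in the post
GO
SET SHOWPLAN_TEXT OFF
GO
-- To see actual row counts per operator instead, use SET STATISTICS PROFILE ON;
-- that variant does execute the statement and appends the plan with actual rows.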
"Uri Dimant" wrote:
> KTN
> Do you investigate how to optimize the SP or why it returns wrong result?
> Have you look into an execution plan? Was the optimizer available to use
> indexes? How much data do you return?
>
>
>
> "KTN" <KTN@.discussions.microsoft.com> wrote in message
> news:47DD3664-5829-40E5-92B8-A5EE1E8D9C45@.microsoft.com...
>
>
How to troubleshoot a deadlock problem
Thanks
Lystra
Well, you could run Enterprise Manager, go to Manage->Current activity and look at process info to find the locking process. Look at the locks / process id / object.
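If you would rather do the same check from Query Analyzer instead of Enterprise Manager, a quick sketch using the standard system procedures (this assumes you have permission to view server activity):

-- Show all sessions; the BlkBy column identifies which spid is blocking whom.
EXEC sp_who2

-- Show the locks held or requested by the suspect session (66 in this thread).
EXEC sp_lock 66

-- Translate an ObjId reported by sp_lock into a table name; run this in the
-- database the lock belongs to. The numeric id below is a hypothetical example.
SELECT OBJECT_NAME(1977058079)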
Try running your queries with the (nolock) hint or use temp tables.|||The message I am getting is:
The VB Application identified by the event source logged this Application ANSDAC: Thread ID: 2768, Logged: Microsoft OLE DB Provider for ODBC Drivers Session.Save() -2147467259 [Microsoft][ODBC SQL Server Driver][SQL Server]Transaction (Process ID 66) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Would I need to rerun the process ID 66?|||Yes, whatever query/statement being run by that process was aborted due to deadlocking. It must be re-run.
Try running your job on non-peak/business hours, if possible. What job are you running, btw?|||The job is called Archive IIS Log and it consists of 4 steps:
1-Update department-run sp_Update_Log
2-Copy to Archive Database-run sp_ArchiveLog
3-Clean up Log-run sp_cleaningLog
4-Free log space-Backup Log IISLog with No_Log
It runs at 9am every day, which is a busy time. The developer feels that the solution should be to move the database to another server. The job did finish; it just took 1 hour and 55 minutes, but no one can tell me how long the job normally takes.
So I should run each stored procedure.
Also, when I look at process ID 66,
this is what I see:
SELECT MonthEnd."Accounting_date", Manual_Adjustment."Accounting date", Manual_Adjustment."Adjustment_amount", Manual_Adjustment."CheckNumber" FROM { oj "MonthEnd" MonthEnd INNER JOIN "ManualAdjustment" Manual_Adjustment ON MonthEnd."Accounting_date" = Ma
However this is not the database that they are complaining about. Is it possible for 66 to be connected with more than 1 database?
Thanks
Lystra|||Use Trace Flags. Check them out in Books Online, or else search Google for the key words [TRACE FLAG, DEADLOCK].|||Yes, process 66 can change the database it is running against at any time.
-When it is blocking, look at the blocking process (the job process) and see what database and query/statement it is running.
-Also look at running it at off peak hours if at all possible. If it is something that updates a bunch of records in a single table you are bound to cause locking. Especially if this thing runs that long.
-On the database they are complaining about - do the job scripts have a lot of references to this table or are there only a few? If there are a few it should be easier to pinpoint which statement is causing the locking.
-Try starting Sql Profiler before the job runs and let it run to track all sql statements made by the job. As soon as the blocking stops, look to see if any long queries just finished running in the profiler. These could be the culprits.
Once you know the statements causing the issue you can better determine how to resolve the issue.|||First order of business, go beat the developer soundly for scheduling an automated process during a high usage time for users. That may take a while to beat them soundly, but it is an important step, so I'll wait while you finish.
Now that they've been beaten, go back and beat them again. That might make them think twice about such idiocy in the future.
Ok, that is as good a preventative as we can manage at the moment, so now we should move on to correcting the underlying problem. Pick the lowest usage period of the day when you can be certain that all the affected systems will be available. 03:00 works well for me, although some folks prefer 20:00 or something like it. Change the job so that it now runs at a low usage time instead of a high usage time.
After the job runs at its new time, check the SQL Server and NT logs for any error messages. Sometimes there are hidden dependancies that can trip you up when you move a job. If those both check out, then check the log tables that this process maintains to be sure that they also look as you'd expect.
Once you get done cleaning up the problem, go hose the blood off of the developer and carry them to a nice quiet closet somewhere to sleep off the beatings... There's no point in being mean about things, you just have to curb that behavior problem!
-PatP|||The database is called ISSLOG, which logs all of the traffic from the web. Now why would this logging database cause deadlocking? The database logs information like clienthost, username, logtime, service, machine, serverIP, processing time, etc.
I have added the code; please look to see why it would cause a deadlock issue.
Thanks
Lystra|||I think my eyes just popped. I think if code1, code2 and code3 are run one after the other, there might be some other processing running that's causing the deadlock. If Code2 and Code3 are run at the same time, it might cause some trouble.|||The job is called Archive IIS Log and it consists of 4 steps:
1-Update department-run sp_Update_Log (CODE 3)
2-Copy to Archive Database-run sp_ArchiveLog (CODE 1)
3-Clean up Log-run sp_cleaningLog (CODE 3)
4-Free log space-Backup Log IISLog with No_Log
The code is run one after the other.
If the code is just logging traffic, why would it create a deadlock, since it's not looking at rows of data?
Thanks|||First order of business, go beat the developer soundly for scheduling an automated process during a high usage time for users. That may take a while to beat them soundly, but it is an important step, so I'll wait while you finish.
Now that they've been beaten, go back and beat them again. That might make them think twice about such idiocy in the future.
Ok, that is as good a preventative as we can manage at the moment, so now we should move on to correcting the underlying problem. Pick the lowest usage period of the day when you can be certain that all the affected systems will be available. 03:00 works well for me, although some folks prefer 20:00 or something like it. Change the job so that it now runs at a low usage time instead of a high usage time.
After the job runs at its new time, check the SQL Server and NT logs for any error messages. Sometimes there are hidden dependancies that can trip you up when you move a job. If those both check out, then check the log tables that this process maintains to be sure that they also look as you'd expect.
Once you get done cleaning up the problem, go hose the blood off of the developer and carry them to a nice quiet closet somewhere to sleep off the beatings... There's no point in being mean about things, you just have to curb that behavior problem!
-PatP
What pat said, but just for good measure, beat them again after the problem's sorted, then hose them down. ;)
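Picking up the trace flag suggestion from earlier in the thread, the usual SQL Server 2000 recipe looks roughly like the following; enable the flags only while troubleshooting, since they add output to the error log:

-- 1204 writes the resources and statements involved in each deadlock,
-- 3605 routes that output to the SQL Server error log, and -1 applies
-- the flags server-wide rather than to the current connection only.
DBCC TRACEON (1204, 3605, -1)
-- ... reproduce the deadlock, then read the error log for the details ...
DBCC TRACEOFF (1204, 3605, -1)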
How to trim XML's attribute values?
I have a program which generates an xml file from a dataset using the
DataSet.WriteXML function. The result is as follows:
<DataSet xmlns="http://tempuri.org/DataSet1.xsd">
<TB TB_NO="NN01001" DESCRIPTION="NN " DATE="1/16/2007
12:00:00 AM" Name=" "
E_MAIL="test@.test.com">
<TD TD_NO="NN1-1 " LINE_NAME="FREE TRADE ZONE
" ADD1=" " TRSS="3" TYPEDESC="
" >
<DESCRIPTION SEQ_NO="001 " MARKS="RESOURCES " DSCP="
" />
<DESCRIPTION SEQ_NO="002 " MARKS=" " DSCP="
" />
<DESCRIPTION SEQ_NO="003 " MARKS="P.O.:11111111 " DSCP="
" />
<DESCRIPTION SEQ_NO="004 " MARKS="MADE IN CHINA " DSCP="
" />
<DESCRIPTION SEQ_NO="005 " MARKS=" " DSCP="
" />
<DESCRIPTION SEQ_NO="006 " MARKS=" " DSCP="
" />
</TD>
</TB>
</DataSet>
Is there any setting to make it generate the xml file with trimmed values,
so the result is output as:
<DataSet xmlns="http://tempuri.org/DataSet1.xsd">
<TB TB_NO="NN01001" DESCRIPTION="NN" DATE="1/16/2007 12:00:00 AM" Name=""
E_MAIL="test@.test.com">
<TD TD_NO="NN1-1" LINE_NAME="FREE TRADE ZONE" ADD1="" TRSS="3"
TYPEDESC="" >
<DESCRIPTION SEQ_NO="001" MARKS="RESOURCES" DSCP="" />
<DESCRIPTION SEQ_NO="002" MARKS="" DSCP="" />
<DESCRIPTION SEQ_NO="003" MARKS="P.O.:11111111" DSCP="" />
<DESCRIPTION SEQ_NO="004" MARKS="MADE IN CHINA" DSCP="" />
<DESCRIPTION SEQ_NO="005" MARKS="" DSCP="" />
<DESCRIPTION SEQ_NO="006" MARKS="" DSCP="" />
</TD>
</TB>
</DataSet>
In your SQL SELECT or stored procedure, just use LTRIM(column_name) or
RTRIM(column_name) to remove the unwanted spaces; the resulting dataset will
then not contain them when WriteXML produces the output.
So to correct your data below it would be something like:
SELECT
RTRIM(TB_NO),
RTRIM(DESCRIPTION)
...
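Spelled out against the columns visible in the posted XML (the column list and table name below are assumptions based on the attribute and element names), it would look roughly like:

-- Trim leading and trailing spaces and keep the original column names, so
-- DataSet.WriteXML emits the same attributes without the padding.
SELECT
    RTRIM(LTRIM(TB_NO))       AS TB_NO,
    RTRIM(LTRIM(DESCRIPTION)) AS DESCRIPTION,
    RTRIM(LTRIM(Name))        AS Name,
    E_MAIL
FROM TB   -- hypothetical table name taken from the <TB> element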
"ABC" <abc@.abc.com> wrote in message
news:elMHZEiOHHA.4484@.TK2MSFTNGP02.phx.gbl...
>I have a program which generate a xml file from dataset using
> DataSet.WriteXML function. The result is as follow:
> <DataSet xmlns="http://tempuri.org/DataSet1.xsd">
> <TB TB_NO="NN01001" DESCRIPTION="NN " DATE="1/16/2007
> 12:00:00 AM" Name=" "
> E_MAIL="test@.test.com">
> <TD TD_NO="NN1-1 " LINE_NAME="FREE TRADE ZONE
> " ADD1=" " TRSS="3" TYPEDESC="
> " >
> <DESCRIPTION SEQ_NO="001 " MARKS="RESOURCES " DSCP="
> " />
> <DESCRIPTION SEQ_NO="002 " MARKS=" " DSCP="
> " />
> <DESCRIPTION SEQ_NO="003 " MARKS="P.O.:11111111 " DSCP="
> " />
> <DESCRIPTION SEQ_NO="004 " MARKS="MADE IN CHINA " DSCP="
> " />
> <DESCRIPTION SEQ_NO="005 " MARKS=" " DSCP="
> " />
> <DESCRIPTION SEQ_NO="006 " MARKS=" " DSCP="
> " />
> </TD>
> </TB>
> </DataSet>
>
> Is there any setting to set it to generate xml file with trim space values
> and result should output as :
> <DataSet xmlns="http://tempuri.org/DataSet1.xsd">
> <TB TB_NO="NN01001" DESCRIPTION="NN" DATE="1/16/2007 12:00:00 AM" Name=""
> E_MAIL="test@.test.com">
> <TD TD_NO="NN1-1" LINE_NAME="FREE TRADE ZONE" ADD1="" TRSS="3"
> TYPEDESC="" >
> <DESCRIPTION SEQ_NO="001" MARKS="RESOURCES" DSCP="" />
> <DESCRIPTION SEQ_NO="002" MARKS="" DSCP="" />
> <DESCRIPTION SEQ_NO="003" MARKS="P.O.:11111111" DSCP="" />
> <DESCRIPTION SEQ_NO="004" MARKS="MADE IN CHINA" DSCP="" />
> <DESCRIPTION SEQ_NO="005" MARKS="" DSCP="" />
> <DESCRIPTION SEQ_NO="006" MARKS="" DSCP="" />
> </TD>
> </TB>
> </DataSet>
>
>
How to trim off the time part of a DateTime field
I have a table in SQL 2000 holding data about students.
The data is migrated from another database.
There is a field called birthday; it stores the data with a format like:
1994/11/03 AM 02:09:00
When I bind the field to a datagrid, how can I suppress the time part, or
how can I trim the time part from the database?
Hi,
You can try as
select convert(varchar(10), getdate(), 102)
thanks and regards
Chandra
"ad" wrote:
> I have a tabls of SQL 2000, about the data of student.
> The data is migrate from another databse,
> There is a field called birthday, it stores the data with the formate like:
> 1994/11/03 AM 02:09:00
> When I bind the field to datagrid, how can I suppress the time part , or
> how can I trim the time part form database?
>
>|||You can also check for available types on
2f3o.asp" target="_blank">http://msdn.microsoft.com/library/d...br />
2f3o.asp
thanks and regards
Chandra
"Chandra" wrote:
> Hi,
> You can try as
> select convert(varchar(10), getdate(), 102)
> thanks and regards
> Chandra
>
> "ad" wrote:
I have studied the article you mentioned, but I still can't figure out how to
do it.
If the field name is Birthday, could you give me an example?
"Chandra" <Chandra@.discussions.microsoft.com> glsD
:BD3DB546-9F3D-491B-8596-6F2E0ABCA705@.microsoft.com...
> You can also check for available types on
>
http://msdn.microsoft.com/library/d..._ca-co_2f3o.asp
> thanks and regards
> Chandra
> "Chandra" wrote:
>
|||E.g.
SELECT convert(varchar(10), birthday, 102) FROM YourTableName
Regards
Steen
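If the value needs to stay a datetime (for example so that sorting or data binding keeps working) rather than become a string, a common alternative is to round it down to midnight; both forms below reuse the birthday column and table name from this thread:

-- Keeps the datetime type but zeroes out the time portion.
SELECT DATEADD(day, DATEDIFF(day, 0, birthday), 0) AS birthday
FROM YourTableName

-- If a string is fine, other CONVERT styles are available,
-- e.g. 101 = mm/dd/yyyy, 103 = dd/mm/yyyy, 112 = yyyymmdd.
SELECT CONVERT(varchar(10), birthday, 112) AS birthday
FROM YourTableName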
ad wrote:
> I have study the article you mentioned, but I still cann't figure out
> how to do.
> If the field name is Birthday, could you give me a exmaple?
>
> "Chandra" <Chandra@.discussions.microsoft.com> glsD
> o_2f3o.asp" target="_blank">http://msdn.microsoft.com/library/d... />
o_2f3o.asp