Friday, March 30, 2012

row-locking question

i'm considering implementing my own artificial key table that would
work similarly to an identity value. insert transactions would first
have to visit this artificial key table, lock the artificial key row
defining the key needed by the data row to be inserted. after the table
lock, it would get the value, insert the record, increment the value,
and update the artificial key table.
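in concrete terms, the sequence above might look roughly like this (just a sketch; the ArtificialKey table, its columns, and dbo.MyTable are placeholder names):
Declare @key int
Begin Tran
-- UPDLOCK/HOLDLOCK make a second session running this same block wait at the
-- Select until this transaction commits, so no two inserts see the same key
Select @key = NextKey
From dbo.ArtificialKey WITH (UPDLOCK, HOLDLOCK)
Where TableName = 'MyTable'
Insert Into dbo.MyTable (Id) Values (@key)
Update dbo.ArtificialKey
Set NextKey = NextKey + 1
Where TableName = 'MyTable'
Commit Tran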
the reasons i am doing this in a user-defined fashion instead of using
an identity column are varied. first, sometimes the sequence needs to
be semi-random. also, i want to avoid all the issues with migrating
identity columns.
so, i have two questions:
1) assuming that the code and locking are written at the appropriate
levels of granularity, integrity, and speed, is this a reasonable thing
to attempt?
2) all the books i am reading say "trust SQL Server locking management"
-- but i'm guessing they didn't have this kind of use in mind. is this
a circumstance where manual transactional locking is appropriate?
thanks for any suggestions / advice,
jason|||I've given this a lot of thought, because IDENTITY doesn't provide a
database-wide unique value and the size of ROWGUIDs hinders performance. I
had considered the possibility of two user-defined functions, one scalar
function that returns a single surrogate value, and one table-valued
function that returns a specified number of values. The main problem with
this scheme is the issue of locking. Either you can obtain the surrogates
prior to starting a transaction, or you have to find another way to deal
with the locks and blocking, or you set up separate identity ranges for each
table. What is needed is a way to serialize access to a "next key" table
outside of the calling transaction. I've thought about writing an extended
stored procedure or using the sp_OA procs to do this because they can open
or use a separate shared connection, but I haven't had time to pursue the
issue further, nor am I convinced that the performance will be satisfactory.
Extended procedures are deprecated in SQL Server 2005 because it hosts the
CLR, so anything I write now will probably have to be rewritten later.
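One way to picture the "obtain the surrogates prior to starting a transaction" option is a block-reservation procedure along these lines; the table, column, and procedure names are hypothetical and this is only a sketch of the idea, not tested code:
CREATE PROCEDURE reserve_key_block
@BlockSize int ,
@FirstKey int OUTPUT
AS
-- assumes dbo.NextKey holds a single row; column references on the right-hand
-- side see the pre-update value, so @FirstKey is the start of the reserved
-- range and NextValue moves past it in one atomic statement
UPDATE dbo.NextKey
SET @FirstKey = NextValue, NextValue = NextValue + @BlockSize
RETURN (@@ERROR)
Called before the caller opens its own transaction, that UPDATE commits on its own and the lock on the key row is released immediately, so the reserved keys can be used later without further blocking on the key table.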
"jason" <iaesun@.yahoo.com> wrote in message
news:1122929530.893879.301170@.g44g2000cwa.googlegroups.com...
> i'm considering implementing my own artificial key table that would
> work similarly to an identity value. insert transactions would first
> have to visit this artificial key table, lock the artificial key row
> defining the key needed by the data row to be inserted. after the table
> lock, it would get the value, insert the record, increment the value,
> and update the artificial key table.
> the reasons i am doing this in a user-defined fashion instead of using
> an identity column are varied. first, sometimes the sequence needs to
> be semi-random. also, i want to avoid all the issues with migrating
> identity columns.
> so, i have two questions:
> 1) assuming that the code and locking are written at the appropriate
> levels of granularity, integrity, and speed, is this a reasonable thing
> to attempt?
> 2) all the books i am reading say "trust SQL Server locking management"
> -- but i'm guessing they didn't have this kind of use in mind. is this
> a circumstance where manual transactional locking is appropriate?
> thanks for any suggestions / advice,
> jason
>|||jason wrote:
> i'm considering implementing my own artificial key table that would
> work similarly to an identity value. insert transactions would first
> have to visit this artificial key table, lock the artificial key row
> defining the key needed by the data row to be inserted. after the
> table lock, it would get the value, insert the record, increment the
> value, and update the artificial key table.
> the reasons i am doing this in a user-defined fashion instead of using
> an identity column are varied. first, sometimes the sequence needs to
> be semi-random. also, i want to avoid all the issues with migrating
> identity columns.
> so, i have two questions:
> 1) assuming that the code and locking are written at the appropriate
> levels of granularity, integrity, and speed, is this a reasonable
> thing to attempt?
> 2) all the books i am reading say "trust SQL Server locking
> management" -- but i'm guessing they didn't have this kind of use in
> mind. is this a circumstance where manual transactional locking is
> appropriate?
> thanks for any suggestions / advice,
> jason
You can do this, but try to keep the key value generation routine in a
separate transaction so it is not tied to the new inserts themselves.
For example:
Declare @key int
Begin Tran
Update dbo.keytest
Set NextKey = NextKey + 1
Select @key = NextKey From dbo.keytest
Commit Tran
Insert Into dbo.MyTable (newkey) values (@key)
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Actually you can save a few steps by doing this below. Since the Update is
ATOMIC by itself you don't need a BEGIN TRAN - COMMIT and the extra select.
CREATE TABLE [dbo].[NEXT_ID] (
[ID_NAME] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[NEXT_VALUE] [int] NOT NULL ,
CONSTRAINT [PK_NEXT_ID_NAME] PRIMARY KEY CLUSTERED
(
[ID_NAME]
) WITH FILLFACTOR = 100 ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE PROCEDURE get_next_id
@ID_Name VARCHAR(20) ,
@ID int OUTPUT
AS
-- a single atomic UPDATE: increments the counter and assigns the new value
-- to @ID in the same statement, so no explicit transaction is needed
UPDATE NEXT_ID SET @ID = NEXT_VALUE = (NEXT_VALUE + 1)
WHERE ID_NAME = @ID_Name
RETURN (@@ERROR)
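A minimal usage sketch for the procedure above, assuming a hypothetical dbo.MyTable with an int column named newkey; the NEXT_ID table needs one seed row per table name before the first call:
-- one-time seed row (the table name here is hypothetical)
INSERT INTO NEXT_ID (ID_NAME, NEXT_VALUE) VALUES ('MyTable', 0)
-- at insert time
DECLARE @NewID int, @rc int
EXEC @rc = get_next_id 'MyTable', @NewID OUTPUT
IF @rc = 0
INSERT INTO dbo.MyTable (newkey) VALUES (@NewID)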
Andrew J. Kelly SQL MVP
"David Gugick" <david.gugick-nospam@.quest.com> wrote in message
news:u3zEmPvlFHA.3256@.tk2msftngp13.phx.gbl...
> jason wrote:
> You can do this, but try and keep the key value generation routine in a
> separate transaction to prevent tying it to the new inserts themselves.
> For example:
> Begin Tran
> Update dbo.keytest
> Set NextKey = NextKey + 1
> Select @key = NextKey From dbo.keytest
> Commit Tran
> Insert Into dbo.MyTable (newkey) values (@key)
>
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com|||This kind of thing is best done at the raw engine level. If you really
need it, switch products to something with row-level locking. Have you
looked at Firebird, Interbase and other optimistic concurrency control
databases?|||seems reasonable. decreases the lock time, and only costs me an ID
value if the insert fails for some reason. thanks.|||seems like a good implementation, thanks!|||no, i haven't looked at any alternative database products. this
database is thoroughly intertwined with a growing .NET middleware, and i
think the company is looking quite enthusiastically at Sql Server 2005,
with the .NET platform built in.
Sql Server does claim to have row-level locking. they just say it is
good to trust the lock manager for most tasks. is Sql Server manual
row-level locking inadequate in some way?
thanks,
jason|||Hi
SQL Server 2000 and 2005 both do full row level locking in their default
configurations.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"jason" <iaesun@.yahoo.com> wrote in message
news:1122989683.046590.261270@.f14g2000cwb.googlegroups.com...
> no, i haven't looked at any alternative database products. this
> database is thoroughly intertwined with a growing .NET middleware, and i
> think the company is looking quite enthusiastically at Sql Server 2005,
> with the .NET platform built in.
> Sql Server does claim to have row-level locking. they just say it is
> good to trust the lock manager for most tasks. is Sql Server manual
> row-level locking inadequate in some way?
> thanks,
> jason
>|||by the description of the task at hand, does this sound like something
i would have to employ manual locking to accomplish? or would beginning
a transaction with an update, and then committing the transaction when
"safe" do all the locking i need?
