Thursday, November 17, 2011

Quick Query to Look for Columns in a DB

 

Nothing fancy, just a quick query for 'stab in the dark' hunts for data. I get that if you're at the point of hoping someone named a column helpfully, you've got bigger problems; nevertheless, I needed to throw it together.

Find a string in the list of column names in a database and return table and datatype information.

DECLARE @COLUMN_NAME_TEXT_TO_FIND NVARCHAR(255)
SET @COLUMN_NAME_TEXT_TO_FIND = 'SOMETHING'


SELECT COL.NAME AS COLUMN_NAME,
       OBJECT_NAME(COL.ID) AS TABLE_NAME,
       XTYPES.NAME AS DATATYPE,
       UTYPES.NAME AS USER_DATATYPE,
       COL.PREC AS COLUMN_PRECISION
FROM SYS.SYSCOLUMNS COL
-- Restrict the base-type join to rows where XUSERTYPE = XTYPE; otherwise columns whose
-- base type is shared by a user type (e.g. NVARCHAR/SYSNAME) come back more than once
JOIN SYS.SYSTYPES XTYPES ON COL.XTYPE = XTYPES.XTYPE AND XTYPES.XUSERTYPE = XTYPES.XTYPE
JOIN SYS.SYSTYPES UTYPES ON COL.XUSERTYPE = UTYPES.XUSERTYPE
WHERE COL.NAME LIKE '%' + @COLUMN_NAME_TEXT_TO_FIND + '%'



With only user data types:




DECLARE @COLUMN_NAME_TEXT_TO_FIND NVARCHAR(255)
SET @COLUMN_NAME_TEXT_TO_FIND = 'SOMETHING'
SELECT COL.NAME AS COLUMN_NAME,
       OBJECT_NAME(COL.ID) AS TABLE_NAME,
       UTYPES.NAME AS USER_DATATYPE,
       COL.PREC AS COLUMN_PRECISION
FROM SYS.SYSCOLUMNS COL
JOIN SYS.SYSTYPES UTYPES ON COL.XUSERTYPE = UTYPES.XUSERTYPE
WHERE COL.NAME LIKE '%' + @COLUMN_NAME_TEXT_TO_FIND + '%'
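
For what it's worth, the same hunt can be written against the newer catalog views (SYS.COLUMNS/SYS.TYPES), which also makes it easy to pull in the schema name. A sketch along the same lines, not tested against every edge case:

DECLARE @COLUMN_NAME_TEXT_TO_FIND NVARCHAR(255)
SET @COLUMN_NAME_TEXT_TO_FIND = 'SOMETHING'

SELECT COL.NAME AS COLUMN_NAME,
       OBJECT_SCHEMA_NAME(COL.OBJECT_ID) AS SCHEMA_NAME,
       OBJECT_NAME(COL.OBJECT_ID) AS TABLE_NAME,
       T.NAME AS DATATYPE,  -- returns the declared type, including user types
       COL.[PRECISION] AS COLUMN_PRECISION
FROM SYS.COLUMNS COL
JOIN SYS.TYPES T ON COL.USER_TYPE_ID = T.USER_TYPE_ID
WHERE COL.NAME LIKE '%' + @COLUMN_NAME_TEXT_TO_FIND + '%'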




Friday, November 11, 2011

SharePoint 2010 Application Crawl Store DB Grows Out of Control

Note:  I've been out of the SharePoint game a while and haven't kept up with its advances; there is now definitely a way to manually clear the log from the GUI.  I'm not sure whether there is also a way to schedule grooming; other blogs will have more up-to-date info.

Problem:  On SharePoint 2010, the Search Service Application crawl store DB grows out of control.  One table, MSSCrawlURLLog, is giant -- and compared to the other tables, I mean really huge, a couple of orders of magnitude bigger (MSSCrawlhostsLog may also be very large).
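
A quick way to confirm which tables are the problem is sp_spaceused. A minimal sketch, run in your crawl store DB (the GUID-suffixed name below is a placeholder; substitute your own):

USE [Search_Service_Application_CrawlStoreDB_funGUID]  -- your crawl store DB name here
GO
-- Report row count and space used for the usual suspects
EXEC sp_spaceused 'MSSCrawlURLLog'
EXEC sp_spaceused 'MSSCrawlhostsLog'
GO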

Reason:  Crawl logs are retained for a very long time by default (it appears to be 30 days), and there isn't a way to configure this in the GUI.  Only errors are logged, and they're logged with the frequency of the indexing crawl schedule.  That should be fine, but if, for instance, you have a pretty large farm and are migrating from 2007 to 2010, and you installed the Fab40 templates or have users who hardcoded URLs, these logs will get very large, very quickly.  There are a lot of postings about this issue, and the most commonly recommended fix is to reset the crawled content and re-crawl. With a large farm this amounts to a search outage (search may be up, but you won't get results until content is re-indexed), and it takes a long time. I'm not crazy about that solution, but I haven't found anything explicitly saying my fix is supported, so if in doubt, go with the reset.

One Fix:  In the Search Service Application crawl store DB (default name Search_Service_Application_CrawlStoreDB_funGUID), there is a procedure, proc_MSS_CrawlReportCleanup.   It's not well documented, and I can't find any way to call it through the GUI or PowerShell, but it does exactly what you'd think it would do and cleans up the offending tables.
Our environment is mirrored, so I wrapped the call in the proc below, which first checks that the database is the mirroring principal.  The check shouldn't be a problem on a standalone DB, but you can take it out if it doesn't apply.  Pass the procedure the name of your crawl store database and the number of days of history you want to retain.  It will generate a lot of IO if your table has grown a lot, so we run it off-peak.  Also, it's not a bad idea to look through the crawl error log and see whether any of the errors can be fixed at the source; errors that don't get logged don't have to be cleaned up.
USE [YOURUTILITYDB]
GO

/****** Object:  StoredProcedure [dbo].[PRUNE_SEARCH_CRAWL_STORE]    Script Date: 11/11/2011 16:29:13 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[PRUNE_SEARCH_CRAWL_STORE]
(
    -- Pass the name of your crawl store database as @CrawlStoreDBName;
    -- the default name is Search_Service_Application_CrawlStoreDB_somecrazyguid
    @CrawlStoreDBName nvarchar(1000),

    -- @prunetime sets the number of days of error history to retain
    @prunetime int
)
AS
BEGIN

    declare @cmd nvarchar(2000)

    set @cmd = @CrawlStoreDBName + '.dbo.proc_MSS_CrawlReportCleanup'

    -- For mirrored databases, the proc/SQL Agent job should live on both servers
    -- and only execute on the principal.
    -- Check the database's current mirroring role first, because the command will
    -- fail if executed against a mirror database (mirroring_role = 1 is principal)
    if (select m.mirroring_role
        from sys.databases d
        join sys.database_mirroring m on d.database_id = m.database_id
        where d.name = @CrawlStoreDBName) = 1
    begin
        -- If the database is the principal, run the log prune
        exec @cmd @prunetime
    end

END
GO
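
A hedged usage example (the crawl store name is a placeholder, and three days of retention is an arbitrary choice for illustration):

-- Keep three days of crawl error history
EXEC [dbo].[PRUNE_SEARCH_CRAWL_STORE]
    @CrawlStoreDBName = N'Search_Service_Application_CrawlStoreDB_funGUID',
    @prunetime = 3

We run it from a SQL Agent job on both mirroring partners; the principal check inside the proc means only the active side does any work.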



SQL 2008 R2 Mirroring: Add a Witness After the Fact

Just a quick note:
We ran into a situation where a database was already mirrored without a witness (so no automatic failover), and we wanted to add one.  To add the witness server after the fact, we ran the mirroring wizard, which appeared to succeed.  Failover, however, didn't work, and the log on the secondary was full of permissions errors.
If you run all SQL services as the same user across your primary/secondary/witness, you won't see this error (I'm not suggesting that's a good idea).  In our case all the SQL services run as dedicated domain accounts per server, and running the mirroring wizard doesn't add the witness server's account to the secondary if mirroring without a witness is already configured.

There are two ways to fix this:

  • The right way:  Figure out how to grant the witness server's service account permissions on the mirroring endpoint on the secondary (I gave up on this because cheating is so much easier, and it's not something we have to repeat); a sketch of what this probably looks like follows this list.
  • The Kind of Cheating Way I Fixed It:  Create a dummy DB on the primary and secondary (don't worry about a real mirroring setup; you just need the wizard to set permissions).  On the dummy DB, run the mirroring setup as you normally would for synchronous mirroring with witness.  Drop the dummy DB.  Re-run the mirroring setup to add a witness on the primary database.  Test failover.
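
For the record, the 'right way' is probably only a couple of statements.  A sketch, assuming the witness service account is DOMAIN\sqlwitness and the endpoint is named Mirroring (check yours with: select name from sys.database_mirroring_endpoints); I haven't verified this end to end:

-- Run on the secondary (and anywhere else the witness account is missing):
-- create a login for the witness server's service account...
CREATE LOGIN [DOMAIN\sqlwitness] FROM WINDOWS;
GO
-- ...and allow it to connect to the mirroring endpoint
GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\sqlwitness];
GO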