Hi Friends,

Sometime back, there was a request on the MSDN forums where the OP wanted to archive replicated partitioned tables in the publisher database. I thought it would be good to share the solution with everyone on this forum as well. I used SQL Server 2008 Evaluation Edition SP1, build 10.0.2531, for my testing.
In this post we will see:

Part 1) Archiving the replicated non-partitioned (normal) tables.
Part 2) Archiving the replicated partitioned tables.

At the end, you will notice one nice-to-know feature of partitioned tables.

Part 1) Archiving the replicated non-partitioned (normal) tables.
Publisher: DB2Migration
Subscriber: DB2Migration_Sub
Replication topology: Transactional replication
Articles: dbo.Test
Other details: Both tables have 10,000 rows each after the first synchronization.


Now we need to archive the Test table in the publisher database but keep the subscriber untouched, i.e. the rows in the subscriber should not change. For this example we will delete all the rows of the publisher table.

How should we do it?
If I delete any row on the publisher, the delete will be replicated to the subscriber. One way is to stop the Log Reader agent and then delete the rows. After this, I can run sp_repldone on the publisher and restart the Log Reader agent. Yes, this is perfectly achievable. Here we go…
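The approach can be sketched in T-SQL as follows. This is a minimal sketch: the database and table names match the setup above, but the archive table name is my own, and the Log Reader agent is stopped from SQL Server Agent or Replication Monitor because its job name is environment-specific.

```sql
-- 1) Stop the Log Reader agent for the publisher database first
--    (via SQL Server Agent jobs or Replication Monitor).

USE DB2Migration;
GO
-- 2) Keep a copy of the data, then delete the rows on the publisher.
--    The deletes are logged but not yet read by the Log Reader agent.
SELECT * INTO dbo.Test_Archive FROM dbo.Test;
DELETE FROM dbo.Test;
GO
-- 3) Mark all pending replicated transactions as already distributed,
--    so the deletes are never picked up.
--    WARNING: this affects EVERY publication in this database.
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1;
GO
-- 4) Restart the Log Reader agent.
```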



DBCC OPENTRAN will show:

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

We will now enable the Log Reader agent and see that there are no transactions to be replicated.


SELECT COUNT(*) on both tables will show 0 (zero) and 10,000 rows respectively.

After this we will insert 1000 rows into the publisher table (we need to be careful, as the tables have a primary key). As a result the subscriber now has 11,000 rows and the publisher has 1,000 rows.

This approach is going to be costly when there are millions of rows, because a delete (or update, or insert) is a logged activity. There is one more drawback, and perhaps a more critical one: there is only one Log Reader agent per database. So if there is more than one publication on the same database and we run sp_repldone, we will hurt the other subscriptions and publications. So we have to be careful.

The other way is to truncate the table (after moving the data to an archive table), but replicated tables cannot be truncated. Why? Simple: truncating a table logs only the deallocation of the pages; the individual records are not logged as deletes. Metadata consistency is not maintained between publisher and subscriber, so there is no way for replication to know what data the deallocated pages contained. So to truncate the table, you need to remove the article from the publication. If you want to do that, the steps are:

1) Stop the Log Reader agent and the Distribution agent
2) Drop the article(s) from the publication
3) Archive the table to another table (this will be a logged activity) by BULK INSERT, bcp, or the Import/Export wizard
4) Truncate the table
5) Add the article again
6) In the publication properties, set the "Action if name is in use" option for all the articles to "Keep existing object unchanged". This is the most important step; please cross-check it a few times to make sure that "Keep existing object unchanged" is really set
7) Generate the snapshot again
8) Start the Log Reader agent and the Distribution agent and initiate the new snapshot
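Steps 2, 4, 5 and 6 can also be scripted. The sketch below assumes a publication named Test_Pub on the DB2Migration database with article Test; your names will differ. Note that @pre_creation_cmd = 'none' in sp_addarticle is the scripted equivalent of "Keep existing object unchanged".

```sql
USE DB2Migration;
GO
-- 2) Drop the article (its subscriptions must be dropped first)
EXEC sp_dropsubscription @publication = N'Test_Pub',
                         @article    = N'Test',
                         @subscriber = N'all';
EXEC sp_droparticle @publication = N'Test_Pub', @article = N'Test';
GO
-- 4) Now the table can be truncated (minimally logged)
TRUNCATE TABLE dbo.Test;
GO
-- 5) + 6) Re-add the article, keeping the existing object at the subscriber
EXEC sp_addarticle @publication      = N'Test_Pub',
                   @article          = N'Test',
                   @source_object    = N'Test',
                   @pre_creation_cmd = N'none';
GO
```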

Trust me, you are done :) … But don't you think it's lengthy and a bit risky? Now let's see something new…

Part 2) Archiving the replicated partitioned tables.
Let us first create two new databases, then the partition functions, then the partition schemes, then the partitioned tables, and finally insert data into the tables.
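A minimal sketch of that setup follows. The database and table names match the post (TEST and test_sub); the partition function name, boundary values, and column definitions are my own illustrative choices, chosen so that three boundary values give the 4 partitions used later in the walkthrough.

```sql
CREATE DATABASE TEST;
CREATE DATABASE test_sub;
GO
USE TEST;
GO
-- 3 boundary values => 4 partitions
CREATE PARTITION FUNCTION pf_test (INT)
    AS RANGE LEFT FOR VALUES (2500, 5000, 7500);
CREATE PARTITION SCHEME ps_test
    AS PARTITION pf_test ALL TO ([PRIMARY]);
-- The partitioning column must be part of the clustered key
CREATE TABLE dbo.test
(
    id  INT NOT NULL PRIMARY KEY,
    val VARCHAR(50)
) ON ps_test (id);
GO
-- Load sample rows spread across all four partitions
INSERT INTO dbo.test (id, val)
SELECT TOP (10000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'data'
FROM sys.all_objects a CROSS JOIN sys.all_objects b;
GO
```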

After this, create the publication on database TEST and add the article test. Once that is done, the Test_Pub publication is ready to publish. After this we will create the subscription to this publication. Our subscriber database is test_sub. Once the initial snapshot is synchronized, you will see the following values:


So as of now everything is going to plan. The data is synchronized into the correct partitions. Now, if we need to archive the publisher table, we could try the same old approach that we used in Part 1. However, we will try something new here. That something new is SWITCHING partitions out of the table. I will not explain what it means, because you will see it in a few seconds (or you can refer to BOL). Let's first create the archive table on the publisher. It's a replica of the original test table.
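A sketch of the archive table and the first switch attempt, assuming the column layout and partition scheme from my earlier sketch. For ALTER TABLE … SWITCH the target must match the source table's schema and indexes; creating it on the same partition scheme satisfies that without defining any new partition function.

```sql
USE TEST;
GO
-- Replica of dbo.test, on the same partition scheme
CREATE TABLE dbo.test_archive
(
    id  INT NOT NULL PRIMARY KEY,
    val VARCHAR(50)
) ON ps_test (id);
GO
-- Try to switch out the first partition of the published table
ALTER TABLE dbo.test SWITCH PARTITION 1 TO dbo.test_archive PARTITION 1;
```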

Msg 21867, Level 16, State 1, Procedure sp_MStran_altertable, Line 259
ALTER TABLE SWITCH statement failed. The table '[dbo].[test]' belongs to a publication which does not allow switching of partitions.

Oops, what happened? Yes, it's true: we cannot switch partitions on a replicated table, unless… we explicitly allow partition switching for the publication:
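This is done with sp_changepublication; the publication name Test_Pub is assumed from the setup above.

```sql
-- Allow ALTER TABLE ... SWITCH on tables in this publication
EXEC sp_changepublication
    @publication = N'Test_Pub',
    @property    = N'allow_partition_switch',
    @value       = N'true';
-- Note: we deliberately leave 'replicate_partition_switch' at its
-- default (false), so the switch itself is NOT replicated and the
-- subscriber keeps its rows.
```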

You will get this message

The publication has updated successfully.

Now we switch partition number 1 from the test table to the test_archive table on the publisher. We have 4 partitions.
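With the publication property set, the same statement that failed earlier now succeeds (names assumed from the sketches above):

```sql
-- A metadata-only operation: no rows are physically moved or logged
-- as deletes, regardless of how many rows the partition holds
ALTER TABLE dbo.test SWITCH PARTITION 1 TO dbo.test_archive PARTITION 1;
```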

Now, just check the number of rows in the tables test and test_archive in the publisher database TEST, and in the test table in the subscriber database test_sub:


That's the magic :) Did you also notice that we did not create any new partitions for the test_archive table? Let's query the sys.partitions catalog view and look at the partitions of the test_archive table.
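A query along these lines shows the row counts per partition (sys.partitions is a catalog view rather than a DMV; the filter on index_id keeps only the heap or clustered index rows):

```sql
SELECT OBJECT_NAME(object_id) AS table_name,
       partition_number,
       rows
FROM sys.partitions
WHERE object_id = OBJECT_ID(N'dbo.test_archive')
  AND index_id IN (0, 1)
ORDER BY partition_number;
```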


That's the beauty. You did not have to delete or truncate a single row, nor did you remove the article or stop any agent. Now, if you add rows to table test, they will be replicated to the subscriber as usual. Let's try this by inserting 1000 rows into the test table on the publisher and then checking the subscriber table.

Inserting new values into the test table (in the publisher database):
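A sketch of that insert, assuming the id column from the earlier setup; the new ids continue beyond the existing range so the primary key is not violated.

```sql
USE TEST;
GO
INSERT INTO dbo.test (id, val)
SELECT TOP (1000)
       10000 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'new data'
FROM sys.all_objects a CROSS JOIN sys.all_objects b;
GO
-- Once the Log Reader and Distribution agents have run, the
-- subscriber's test table picks up the same 1000 rows.
```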


I hope you have found this post helpful.



Abhay Chaudhary
