Claire T. Kassakian, Saad Ajmal, Reginald Y. Gohh, Paul E. Morrissey, George P. Bayliss, Immunosuppression in the failing and failed transplant kidney: optimizing outcomes, Nephrology Dialysis Transplantation, Volume 31, Issue 8, August 2016, Pages 1261–1269, https://doi.org/10.1093/ndt/gfv256
Abstract
There is little data to guide clinicians on the optimal management of immunosuppression in patients whose kidney transplant has failed and who have returned to dialysis. Nor is there robust data on whether to perform a transplant nephrectomy. Finally, management of late-stage chronic kidney disease, including deciding on dialysis initiation, modality and access planning, must occur simultaneously with efforts aimed at preserving the failing kidney and residual renal function for as long as possible. In this article, we will review the evidence on these topics and suggest areas for improvement.
INTRODUCTION
The return to dialysis is among the difficult transitions that a patient whose transplanted kidney has failed must make. Sometimes it is permanent, and sometimes it is only temporary while waiting for another kidney. An important decision for the patient's nephrologist is whether and how much immunosuppression to continue, balancing the risk of rejection and the desire to avoid becoming further sensitized against the risk of infection. Further complicating the situation is the question of whether to remove the failed kidney, either according to protocol or for cause (infection, rejection), and stop immunosuppression altogether. Unfortunately, there is little data to guide clinicians.
There is also little data to guide decisions as transplant patients in advanced chronic kidney disease (CKD) progress toward the need for dialysis. Decisions on whether to taper or discontinue immunosuppression can potentially have great bearing on outcomes, especially if the goal is for the patient to be retransplanted. Equally as important for outcomes is how well the patient is prepared for dialysis [1]. But discussions about modality and access planning, as well as about metabolic bone disease and anemia, often seem to be of secondary importance to efforts to preserve the failing transplant kidney for as long as possible and hold off on the return to dialysis [2].
And yet, the scope of the problem is significant. The number of patients returning to dialysis after renal allograft failure has remained stable at ∼5000 annually, or 4–5% of the incident dialysis population [3]. But only 15–16% of those patients receive another kidney transplant within a year [4]. This has significant implications for patient mortality. Patients with a failed allograft have a 42% survival rate over a 10-year period compared with 75% for those with a functioning graft [5]. Mortality for transplant failure patients on dialysis is twice as high as the risk for transplant-naive patients [6]. All-cause mortality is 32% higher for transplant failure patients on dialysis than for transplant-naive patients, with a hazard ratio for death from infection of 2.45 [7]. In a study of Canadian transplant failure patients, the risk of death increased from 2.06 to 5.14 per 100 patient-years following allograft failure, with an adjusted hazard ratio (HR) of 3.39. These researchers concluded that their findings supported the premise that death was a result of loss of transplant function rather than related to patient characteristics (age, gender) or systems-related issues (hemodialysis versus peritoneal dialysis, national health service) [8]. The causes of poorer survival of failed transplant patients on dialysis compared with those newly initiating dialysis are not entirely clear. Research has shown an increase in endothelial dysfunction, which the authors argued implied increased inflammation, associated with elevated total serum cholesterol, elevated C-reactive protein (CRP), reduced serum albumin and reduced coronary artery flow rates, all considered precursors of cardiovascular events [9].
Here we will review the historic data on immunosuppression and nephrectomy in failed renal allografts and newer studies that have focused on transplant nephrectomy. We will then review the limited data on late-stage CKD management before the return to dialysis.
IMMUNOSUPPRESSION AND THE FAILED RENAL ALLOGRAFT
The options for managing immunosuppression in the failed transplant kidney, as described in the literature, are fairly straightforward: (i) continue full immunosuppression, particularly if one plans to retransplant relatively soon after graft failure; (ii) taper immunosuppression if plans for repeat transplantation are more distant or in the event of infection; or (iii) stop immunosuppression entirely over some period of time if there are no plans to retransplant or in the event of infection. The second question is whether to remove the failed allograft and stop immunosuppression entirely. The argument for stopping immunosuppression is that it reduces the risk of infection; the argument for continuing is that it reduces the risk of rejection and further sensitization. The argument for removing the failed transplant is that it obviates the need for immunosuppression entirely; the argument against is the risk of surgery. Recent work, however, suggests that weaning immunosuppression may lead to formation of HLA antibodies and trigger late rejection, leading to nephrectomy in a subset of patients, and is therefore an independent predictor of alloantibody sensitization after failure of the transplanted kidney [10].
A survey of transplant practices found no consensus among US transplant centers on how to handle immunosuppression [11]. Most transplant centers responding to the survey said they did not have a set protocol, but left the decision up to the individual transplant physician. But they were more likely to retain control over the decision rather than ceding it to the nephrologist prescribing dialysis. More than two-thirds of respondents said they did not leave patients with failed kidneys on immunosuppression indefinitely, and roughly the same number said all of their patients were off all immunosuppression 1 year after returning to dialysis. Roughly 20% of respondents left patients on prednisone indefinitely. Respondents most often cited plans to retransplant or ongoing signs and symptoms of rejection as the most important consideration in deciding whether to continue or stop immunosuppression.
The lack of consensus reflects the lack of data. The initial studies on the risk of continuing immunosuppression were carried out in the Netherlands. In the first, researchers reviewed the charts of 37 patients who received 47 kidneys between 1975 and 1995 in one hospital. The relative risk of infection for continuing immunosuppression versus discontinuing was 14.2 [95% confidence interval (CI) 1.4–143.4; P < 0.025] for one infection and 6.8 (95% CI 1.1–1.73; P < 0.04) for two infections [12]. These same investigators and others, in a multicenter review of 197 transplants between 1972 and 1996, found increased overall mortality [odds ratio (OR) 3.4, 95% CI 1.8–6.3; P < 0.0001] as well as deaths from both infection (OR 2.8, 95% CI 1.1–7.0; P = 0.03) and cardiovascular causes (OR 4.9, 95% CI 1.8–13.5; P = 0.001). The immunosuppressive medications were prednisone, azathioprine and, in three patients, cyclosporine. The study did not include data on how or over what period the patients were weaned. Rather, it used data from patient charts to calculate time periods off immunosuppression and on immunosuppression and then compared results in the two periods. They found no significant difference in the number of acute rejections in each group [13].
More recently, researchers in the US looked at the association of immunosuppression status and cause of fever. In 187 patients hospitalized for fever within 6 months of transplant failure, they found that only 38% of patients who had been weaned off immunosuppression before hospitalization had a documented infection while 88% of those who were maintained on immunosuppression had a documented infection as the source of the fever (P < 0.0001). Those who were weaned off immunosuppression before hospitalization were more likely to have a nephrectomy than those who remained on immunosuppression (P < 0.0001) [14]. Nephrectomy in patients without documented infection was usually done because of symptoms attributed to the allograft or because no source of infection could be found. In all patients who underwent nephrectomy without documented source of infection, the fever resolved after nephrectomy.
Several single-center chart reviews of data on immunosuppression at two institutions with similar transplant programs have been undertaken. In one chart review of failed transplant patients between 1999 and 2012, 72% of patients who continued immunosuppression experienced infections compared with 45% who were off immunosuppression (P = 0.24), with line infections occurring only in the patients continued on immunosuppression (14 versus 0%, P = 0.13) [15]. In another small study, there did not appear to be an increased risk of sensitization in patients remaining on immunosuppression versus those taken off, with 4.8% developing anti-HLA antibodies in the group on immunosuppression versus 0% in those weaned from immunosuppressive medications. No significant difference in survival of subsequent allografts was found comparing the three groups (immunosuppression stopped, immunosuppression continued, nephrectomy) [16]. The ability to draw conclusions is limited by the small sample size and single-center nature of the efforts. Even after expanding the numbers in the initial review to 110 patients with sufficient data to analyze, no significant relationship was found either between infection and immunosuppression stopped versus continued (79 versus 21%, P = 0.341) or between infection and nephrectomy versus no nephrectomy (19 versus 81%, P = 0.433) [17]. This suggests a small effect and a need for larger numbers to show whether there is a significant difference between approaches.
NEPHRECTOMY AND THE FAILED RENAL ALLOGRAFT
The data for and against nephrectomy and whether there is a role for continuing immunosuppression are no less clear. Mortality rates from nephrectomy in the pre-cyclosporine era ranged from 7.3 to 26.3% according to one review, which reported a drop in mortality and complications after the introduction of cyclosporine. The authors cited their own experience of no deaths among 90 transplant nephrectomies out of 960 initial cadaveric renal transplants. They also reported a drop in serious complications to <10% [18].
The debate on nephrectomy has centered on whether leaving the failed kidney in helps prevent an increase in sensitization that could interfere with future kidney transplants. An argument for leaving the failed allograft in is that it prevents a rise in panel reactive antibodies (PRA) and reduces the incidence of delayed graft function in a subsequent kidney transplant [19]. But small studies in the era of cyclosporine and more recently in the tacrolimus age have suggested that even with the rise in PRA there was no significant effect on rates of retransplantation or subsequent patient and graft survival. In a small UK study of 89 patients who underwent retransplantation after initial allograft failure, the authors found that PRA levels significantly influenced subsequent graft survival irrespective of whether the patient had a nephrectomy or not [20]. In another series of 127 retransplant recipients in Texas, the authors compared 40 patients who underwent nephrectomy prior to retransplant and 40 patients who did not. More nephrectomies were performed earlier in the pre-cyclosporine (CSA) era, and more biopsies showed acute rejection in the nephrectomy group. While primary allograft nephrectomy was associated with higher levels of preformed antibodies, it had no effect on early function of the new graft, frequency of acute rejection or allograft outcomes in those on CSA [21].
More recent work has looked into the effects of the degree of sensitization pre-transplant, the timing of nephrectomy after transplant failure and whether to continue immunosuppression even after the failed graft has been removed. In one study of 91 nephrectomies, the authors found that unsensitized patients (PRA < 20%) had a significant PRA increase post nephrectomy while patients who were already highly sensitized (PRA > 80%) had a significant but small decrease in PRA [22]. The timing of nephrectomy may also affect the development of antigenicity. Early nephrectomies (within 6 months of allograft failure in this study) were associated with higher PRAs than were nephrectomies performed after 6 months.
In a registry review of data from the United States Renal Data System (USRDS), Johnson and colleagues looked at the use and consequences of transplant nephrectomy among 19 107 transplant failure patients between 1995 and 2003. They found that among 3707 patients with graft survival <12 months (early allograft failure), nephrectomy was performed in 56% of patients and was associated with an increased risk of death (HR 1.13, 95% CI 1.01–1.26). Among the 15 400 people with graft survival of 12 months or more (late allograft failure), nephrectomy was performed in 27% and was associated with a decreased risk of death (HR 0.89, 95% CI 0.83–0.95). In the early nephrectomy group, those who received a second transplant saw a reduced risk of repeat transplant failure (HR 0.72, 95% CI 0.56–0.94) while those in the late nephrectomy group who received a second transplant faced an increased risk of repeat transplant failure (HR 1.20, 95% CI 1.02–1.41) compared with those who did not undergo transplant nephrectomy. The authors drew no conclusions from this observational study [23].
Ayus et al. [24] in a separate review of USRDS data showed improved survival with transplant nephrectomy following a failed renal allograft. They examined data on 10 951 patients who received a kidney transplant between 1994 and 2004 and who returned to dialysis. Of those, 3451 received an allograft nephrectomy. Overall, 34.6% of patients died during follow-up. Receiving an allograft nephrectomy was associated with a 32% lower adjusted relative risk of all-cause death (HR 0.68, 95% CI 0.63–0.74). Data were adjusted for socio-demographic characteristics, burden of comorbid disease, donor characteristics and clinical conditions associated with and propensity to receive an allograft nephrectomy.
An argument in favor of nephrectomy has been that it allows patients to come off immunosuppression. Another may be the association of the failed kidney with a chronic inflammatory state. Patients who underwent nephrectomy after returning to hemodialysis experienced improvement in serum albumin levels and hemoglobin after 6 months on dialysis compared with patients whose failed transplants remained in place after returning to dialysis and those who were dialysis naïve at the time of transplant [25].
But new data have called the utility of nephrectomy as a way to come off immunosuppression into question. Del Bello et al. [26] showed that stopping immunosuppression after nephrectomy may lead to increased sensitization, both HLA and donor-specific antibody (DSA), even in early nephrectomy. Some have suggested a role for improved T-cell proliferation and activation upon return to dialysis as a possible mechanism [27].
A survey of US transplant centers asked about their practices in performing nephrectomies. Most respondents said that they performed nephrectomy most often for ongoing signs and symptoms of rejection (47%) or if signs and symptoms of rejection failed to respond to high doses of corticosteroids (34%). Another 12% of respondents said that they performed nephrectomies if the graft failed within 1 year of transplantation. Finally, 4% of respondents said they performed nephrectomies on all failed renal allografts [11].
The authors carried out a retrospective review of all nephrectomies performed at Rhode Island Hospital between 1997 and 2012. Out of a total of 993 kidney transplants, there were 172 failed kidney transplants with 53 transplant nephrectomies [28]. Patients were significantly more likely to receive a nephrectomy in the event of vascular thrombosis or non-compliance with immunosuppression regimens. Patients with chronic rejection were significantly less likely to undergo nephrectomy. There were significant increases in PRA at the time of transplant failure in both the nephrectomy and non-nephrectomy groups. There was no significant difference in PRAs at the time of last follow-up on the waiting list between the two groups, and there was no significant difference in the number of patients in each group who were highly sensitized. There was no significant difference in retransplantation rates (P = 0.226 at 4 years), immediate or longer term failure of the subsequent allograft (P = 0.1938 and P = 0.232, respectively) or patient survival following retransplantation (P = 0.236). Thirty-day mortality was 3.7% while perioperative morbidity was 28.7%, with hematoma, wound infection and abscess being the three leading causes of complications.
THE FAILING RENAL ALLOGRAFT
While there is still no consensus on how to manage immunosuppression and nephrectomy in the failed allograft kidney, there is a growing body of work on optimizing management of the failing transplant kidney.
The causes of allograft failure are varied. A retrospective review of 1365 transplant kidney biopsies found specific causes in 69.4% of cases but no cause in 30.6% in the last biopsy before graft failure. Interstitial fibrosis and tubular atrophy as the sole cause was present in only 6.9% of the cases. The majority of cases showed that graft failure was multifactorial, combining features of T-cell mediated rejection, acute antibody mediated rejection, de novo or recurrent glomerulopathy and polyomavirus-associated nephropathy. The authors argued that evidence of chronic histologic damage should be taken into account in treatment algorithms [29].
There is robust data on the benefit of maintaining residual urine output in the native kidneys after initiation of peritoneal dialysis (PD) and hemodialysis [30, 31]. While there is no direct data on a survival benefit of maintaining residual renal function in patients with a failed transplant, some have argued that immunosuppression should be continued even after allograft failure to preserve urine output. In a decision analysis, again based on CANUSA data, researchers using a Markov model showed a survival benefit of 5.8 years when immunosuppression was continued after return to PD compared with 5.3 years when it was discontinued. The benefit held for all glomerular filtration rates (GFR) >15 L/week. According to this model, the risk of death was reduced by 12% for every 5 L/week per 1.73 m2 of residual function [32]. The authors made two assumptions in their model: that the survival benefit after graft failure and return to PD was the same as that for a native kidney; and that the risk of carcinoma and opportunistic infection was the same as in the general population if immunosuppression were withdrawn.
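The logic of such a two-state (alive/dead) Markov cohort model can be sketched in a few lines. This is an illustrative sketch only: the baseline mortality and residual-function values below are hypothetical placeholders, not the parameters of the cited analysis; the only figure taken from the study is the ~12% lower death risk per 5 L/week of residual clearance.

```python
# Sketch of a two-state Markov cohort model comparing strategies that
# preserve different amounts of residual renal function (Kru, L/week).
# All numeric inputs are hypothetical placeholders for illustration.

def life_expectancy_years(base_annual_mortality, residual_kru_l_wk,
                          hr_per_5l=0.88, horizon_years=30,
                          cycles_per_year=12):
    """Each monthly cycle, the surviving cohort fraction dies with a
    probability scaled by residual renal function. hr_per_5l=0.88
    encodes the ~12% lower death risk per 5 L/week from the model."""
    hazard_scale = hr_per_5l ** (residual_kru_l_wk / 5.0)
    annual_p_death = min(1.0, base_annual_mortality * hazard_scale)
    # Convert annual probability to a per-cycle probability
    p_death_cycle = 1 - (1 - annual_p_death) ** (1 / cycles_per_year)
    alive, person_years = 1.0, 0.0
    for _ in range(horizon_years * cycles_per_year):
        person_years += alive / cycles_per_year  # accumulate survival time
        alive *= (1 - p_death_cycle)
    return person_years

# Hypothetical comparison: continuing immunosuppression assumed to
# preserve more residual function than stopping it.
le_continue = life_expectancy_years(0.15, residual_kru_l_wk=20)
le_stop = life_expectancy_years(0.15, residual_kru_l_wk=10)
print(round(le_continue, 1), round(le_stop, 1))
```

With these placeholder inputs the strategy preserving more residual function yields the longer modeled life expectancy, mirroring the direction (though not the magnitude) of the published result.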
Mortality risk is elevated not only after the return to dialysis but during the transitions themselves. In a review of USRDS data, researchers found that the risk of death was 8.2/100 patient-years in the peritransplant period and 17.9/100 patient-years during the initial return to dialysis; both transitions demonstrate increased mortality compared with 6.4/100 patient-years while wait-listed for an initial transplant. Overall death rates after return to dialysis were higher with hemodialysis in this study [33]. There did not appear to be a graft survival advantage in pre-emptive transplantation of patients with failing allografts, according to a retrospective study of USRDS data. But there was an increase in risk of patient death with increased waiting time [34].
A significant source of peritransplant morbidity may well be dialysis access. As with residual renal function, much of the data about transplant patients returning to dialysis is extrapolated from transplant-naive patients. The risk of infection-related mortality from central venous catheters is increased by 131% in the transplant-naive population. And mortality is higher for incident hemodialysis patients starting with indwelling central catheters than for incident patients on peritoneal dialysis [35]. The risk of infection is highest in patients with central venous catheters, and transplant failure patients are less likely to have permanent vascular access in the first 3 months after returning to dialysis [7].
In addition to being less likely to have permanent vascular access, transplant patients returning to dialysis were less likely than transplant-naive patients to have a serum albumin >4 g/dL and more likely to have parathyroid hormone (PTH) >500 pg/mL. Their initial dialysis adequacy was more likely to be greater than that of transplant-naive patients, but by 3 months, failed transplant patients had lower dialysis adequacy, lower hemoglobin and lower albumin, along with continued lower rates of fistula and graft placement; PTH remained elevated at >500 pg/mL [7]. The authors did not specify whether the drop in adequacy reflected a loss of residual renal function.
As with transplant-naive patients starting dialysis, the optimal timing of return to dialysis in the patient with the failed transplant remains uncertain. Two small European studies seemed to differ on whether transplant failure patients returned to dialysis too late. In one small comparison of 74 failed transplant patients and 194 incident patients in Spain, there was no significant difference in GFR (9.4 versus 8.7 mL/min) [6]. A small study in France concluded that creatinine clearance deteriorates rapidly in the final three months of useful allograft function and that failed transplant patients were more likely to return to dialysis with a creatinine clearance of <10 mL/min [36].
The optimal GFR at which failed transplant patients should return to dialysis remains uncertain. A review of data from the Scientific Registry of Transplant Recipients (SRTR) suggested that earlier initiation of dialysis (eGFR > 10 mL/min) leads to worse outcomes even among the healthiest and youngest patients as well as women [37]. Data on the timing of re-initiation of dialysis in transplant patients parallel nephrologists' uncertainty on the optimal time to initiate dialysis in transplant-naive patients. Gaps in access planning and placement, and in adherence to guidelines on bone metabolism and nutrition, reflect a need for greater attention to CKD guidelines even while focusing on preserving renal function in the allograft for as long as possible.
There does not appear to be an optimal dialysis modality for patients with failed allografts. A Canadian study of registry data from 2110 adult patients who initiated dialysis after allograft failure between January 1991 and December 2005 found no overall difference in survival between patients on hemodialysis and patients on PD (HR HD:PD 1.05, 95% CI 0.85–1.31) [38]. Fifty-five percent of the patients who resumed PD after graft failure had been on PD before transplant. PD patients were younger and more likely to be women, as well as to live farther from dialysis units, than patients on hemodialysis.
The authors had postulated that initial survival would be greatest in the PD group because of preservation of residual renal function and that the advantage would diminish over time as residual renal function diminished. But they reported that their data showed no initial survival advantage or any relative change over time. They suggested that this might be due either to delayed initiation of return to dialysis at a lower GFR after residual renal function had already fallen or because there is an accelerated loss of residual renal function in patients returning to dialysis after graft loss compared with that experienced by transplant-naive patients [38].
A more recent review of French registry data suggested equal survival between transplant failure and transplant-naive patients. It compared survival between failed transplant patients under age 65 who returned to dialysis between 2007 and 2009 and a cohort matched according to age, gender, diabetes mellitus and year of starting dialysis. Of the 911 failed transplant patients who returned to dialysis, 103 had died by 1 January 2011, the cut-off point for the survival analysis. Significant predictors of death on multivariate analysis were age over 48 years, the presence of coronary artery and peripheral artery disease and an inability to walk. A total of 778 failed transplant patients met criteria to enter the case-control analysis. Failed transplant patients were found to have significantly lower body mass index and serum albumin than controls, to be more likely to have hemoglobin <10.6 g/dL and to restart dialysis at a lower serum creatinine. Dialysis modality was also significantly different, with transplant failure patients less likely to do peritoneal dialysis (3.34 versus 19.02%, P < 0.001). But there was no difference in survival at 1, 2 and 3 years (log rank P = 0.197 overall). And there was no significant difference in mortality between the two groups [39].
DISCUSSION
Patients with a failed or failing kidney transplant represent a particular challenge for both the transplant and general nephrologist. Even as the goal is to retain residual renal function as long as possible, provision must be made for the return to dialysis. Whether to continue immunosuppression (which drugs and at what level) or to perform a nephrectomy of the failed graft seem to be decisions based on each patient's circumstances, especially comorbid conditions, infections or plans to retransplant. The British Transplantation Society recently published its own set of guidelines for managing the failing kidney transplant, mostly based on evidence it described as of low to very low quality [40]. They suggested that immunosuppression be reduced in the late stages of graft dysfunction, either targeting low levels of calcineurin inhibitors or withdrawing them completely. They recommended weighing the relative risk of maintaining immunosuppression after return to dialysis and relisting for a new kidney against the risk of new allosensitization from stopping it. They recommended stopping all immunosuppression except for steroids immediately after transplant nephrectomy, followed by a gradual withdrawal of steroids. In the event of rejection after immunosuppression is withdrawn, they recommended steroid therapy followed by nephrectomy once inflammation has resolved. We have proposed a similar protocol for weaning immunosuppression over time in selected patients (Table 1).
Others have proposed weaning regimens in which the antimetabolite is stopped at the initiation of dialysis, calcineurin inhibitors are tapered over 4–6 weeks and the steroid dose is maintained for 2–4 weeks after initiation of dialysis and then tapered by 1 mg/month, starting from 5 mg [41].
The debate on calcineurin inhibitor toxicity and weaning strategies in late allograft function must be considered. The survey of US transplant center practices found that there was no preference for weaning antimetabolites or calcineurin inhibitors first after transplant failure, but that the bulk of respondents weaned patients off prednisone last [11]. Much of the literature surrounding weaning of immunosuppression touches on the risk of rejection in the functioning organ, not in the failing allograft. The literature has been clear that late withdrawal of steroids increases the risk of antibody-mediated [42] and cellular [43] rejection.
Literature on the withdrawal of calcineurin inhibitors has concentrated on preservation of GFR in functioning allografts. A 5-year randomized prospective trial compared outcomes on a regimen of cyclosporine, mycophenolate mofetil (MMF) and prednisone versus MMF and prednisone alone. The MMF and prednisone group showed a trend toward improved creatinine clearance (67.4 versus 61.7 mL/min; P = 0.500). Withdrawal of cyclosporine from an MMF-containing immunosuppression regimen resulted in an increased risk of acute rejection (P = 0.0283) and graft loss as a result of rejection over the 5 years (P = 0.0101) [44].
Data on tacrolimus withdrawal specifically has also largely come from studies looking at withdrawal of the drug in functioning allografts as a way to reduce calcineurin inhibitor toxicity. In a randomized prospective trial of tacrolimus withdrawal in immune-quiescent kidney-transplant patients, high rates of antibody mediated acute rejection and de novo DSA formation forced an early end to the trial. The researchers concluded that the risk of rejection outweighed any benefit from tacrolimus withdrawal in patients who are receiving standard of care immunosuppression [45]. Others looking at late tacrolimus withdrawal in stable functioning allografts found a more favorable blood pressure effect than that achieved with mycophenolate mofetil withdrawal over 3 years of follow-up but no difference in intima media thickness [46].
Debate about leaving the failed kidney in versus taking it out is far from settled, but evidence that the failed kidney contributes to a chronic inflammatory state has accumulated over the last decade. Lopez-Gomez et al. followed 43 patients with failed renal transplants on dialysis and compared them with 121 incident dialysis patients. They found significantly lower hemoglobin and serum albumin and higher CRP levels and weekly erythropoietin usage in the failed transplant group versus the incident dialysis patients. They then broke the failed transplant group into 29 who had nephrectomy and 14 who did not. At 6 months after surgery, the nephrectomy group had significantly higher hemoglobin (12.7 ± 1.1 versus 10.9 ± 1.4, P < 0.005), higher serum albumin (3.9 ± 0.6 versus 3.3 ± 0.4, P < 0.001) and lower CRP levels (0.9 ± 0.5 versus 3.6 ± 6.0, P < 0.001) versus the non-nephrectomy group. The weekly dose of recombinant human erythropoietin in the nephrectomy group was a little more than half that in the group with the retained allograft (6925 ± 3173 versus 12 714 ± 8693 units, P < 0.005). There was no significant difference in ferritin levels or in the transferrin saturation index between the two groups, suggesting that the difference in response to erythropoietin was not the result of ineffective or low iron stores. Indeed, the nephrectomy group saw a significant increase in hemoglobin from 9.8 ± 1.8 at baseline to 12.7 ± 1.1 six months after surgery (P < 0.001) [25].
In a further analysis of the 14 patients who did not undergo nephrectomy, it was clear that there was no improvement in hemoglobin, albumin or erythropoietin use from baseline to 6 months. While none of the group showed signs of rejection at first, three patients developed a delayed inflammatory response: two required nephrectomy and one died. The authors concluded that graft intolerance syndrome is a dangerous entity and that elective early nephrectomy by trained surgeons was the safest option for treatment and prevention of chronic inflammation from the failed graft [47]. Others have proposed percutaneous embolization of failed allografts in patients with graft intolerance syndrome as a less invasive first option [48].
The failed allograft may suppress appetite and contribute to malnutrition through inflammatory mechanisms. In a comparison of 56 patients with failed allografts back on dialysis with 77 transplant-naive patients initiating dialysis, researchers found higher levels of the appetite-stimulating hormone ghrelin, high-sensitivity CRP, interleukin 6 and tumor necrosis factor-α in the failed transplant group and lower levels of albumin. Serum levels of leptin, a hormone that inhibits food intake, were similar [49]. The authors hypothesized that the elevated ghrelin level, representing total levels and not the active acylated form, correlated with low body mass index and high levels of inflammation.
Preservation of residual renal function is one of several arguments transplant nephrologists make in favor of continuing immunosuppression at some level in patients with failed renal transplants, though it is not a major reason [11]. Research continues to show a survival benefit in maintaining some degree of residual renal function compared with complete loss of GFR [50]. But whether that advantage extends to patients with failed allografts is supported only by statistical modeling, not by clinical trials. Others have shown that return to PD after graft loss was not a predictor of decline in residual renal function. Nor did patients whose immunosuppression was weaned slowly show any shortening of time to first episode of peritonitis. These authors concluded that PD offered the same preservation of residual renal function to those with a failed graft as to those who were transplant naïve [51].
Still others have found that higher residual renal function in failed kidney transplant patients returning to dialysis was a predictor of more rapid loss of residual function. The authors of this study, which involved 45 patients returning to dialysis from a failed allograft, followed no particular protocol for managing immunosuppression. Steroids were withdrawn over weeks to months, azathioprine was stopped and calcineurin inhibitors were prescribed at varying doses. Some of the patients underwent nephrectomy, but the number of nephrectomies and whether immunosuppression was stopped in patients whose allograft was removed were not specified [52].
The risk of infection and neoplasm from continuing immunosuppression after allograft failure must be considered. Small single-center studies found a trend toward more catheter infections in patients continued on immunosuppression compared with those in whom it was stopped. Higher incidences of Clostridium difficile infection (7 versus 0%), infectious endocarditis (3 versus 0%), pyelonephritis (7 versus 0%), aspergilloma (3 versus 0%) and septic arthritis (3 versus 0%) were found in patients on versus off immunosuppression, but the differences were not significant [15, 17]. There is a growing body of literature about BK nephropathy in the failed transplant and the risk it confers for subsequent transplant outcomes. Case reports have described successful pre-emptive re-transplantation of patients with BK nephropathy and simultaneous nephrectomy to remove the failed transplant, which was considered a reservoir of the virus. Immunosuppression was adjusted but not stopped entirely [53]. In a larger case series, researchers suggested that repeat transplantation was safe for people who had suffered allograft loss because of BK nephropathy, but only after the virus was cleared; patients who cleared the virus were also less likely to experience BK viral replication after retransplantation. Creatinine was higher at 1 year post retransplantation in the group that experienced BK virus replication. In this series of 31 patients, 13 had allograft nephrectomy, with three experiencing recurrence of BK viremia. Eight of the 18 patients who did not undergo nephrectomy experienced replication of BK virus [54].
Preventing sepsis in the patient with a failed transplant may improve mortality, according to a review of registry data and Medicare claims in the US. The overall rate of sepsis in 5117 patients who initiated dialysis after transplant failure was 11.8 per 100 patient-years. Patients age 60 or older, obese patients, patients with diabetes and patients with a history of peripheral vascular disease or congestive heart failure were at higher risk of sepsis, as were women and people who went on hemodialysis after transplant failure. Transplant nephrectomy did not confer a higher risk of sepsis. The sepsis rate in transplant failure patients in the first three to six months after failure was higher than for transplant recipients or incident dialysis patients in the first six months. The review did not address the role of continued immunosuppression or vascular access creation. Overall, patients with failed transplants who developed sepsis had a significantly higher risk of death compared with those who did not (HR 2.93; 95% CI 2.64–3.24; P < 0.001) [55].
Some of the risk of cancer in transplant populations may be related to infection. Researchers examined registry data from Australia and New Zealand to see whether reduction of immunosuppression had any effect on development of cancer after allograft failure. Using transplant and cancer registry data, they reviewed files for 8173 people who had kidney transplants between 1982 and 2003. They found that the standardized incidence ratios of cancers related to infections, such as Kaposi's sarcoma and non-Hodgkin's lymphoma, which are associated with human herpes virus 8 and Epstein-Barr virus, respectively, and anogenital, oral cavity and oropharyngeal cancers, which are associated with human papilloma virus, were significantly higher during periods of allograft function than after allograft failure and reduction in immunosuppression. Standardized incidence ratios for leukemia and lung cancer remained high after allograft failure, while standardized incidence ratios for kidney, urinary tract and thyroid cancers increased after allograft failure [56].
The risk of patients becoming highly sensitized or suffering acute rejection necessitating emergent nephrectomy after withdrawal of immunosuppression has not yet been fully quantified. In a small study of 49 patients, researchers showed a significant difference in rates of sensitization after allograft failure depending on the rate at which immunosuppressant drugs were weaned. When immunosuppression was weaned in 3 months or less, a lower percentage of subjects remained unsensitized (30%) than when immunosuppression was weaned over >3 months (66%) (P = 0.01). By contrast, there was no significant difference in sensitization between the two groups before primary allograft placement. There was no significant difference in the rate of nephrectomy after graft failure between the groups. Half the patients in the study had the antimetabolite weaned off first while the other half had the calcineurin inhibitor weaned first, but the study was not powered to look at differences in immunosuppressant regimens. Time-dependent Cox models found no association between prolonged immunosuppression withdrawal and mortality; again, the study was not originally powered to show a mortality difference. The authors thus concluded that prolonged withdrawal may be safe [57].
The role of the failed allograft as a potential sump for anti-HLA antibodies deserves further discussion. While some have suggested that the high incidence of DSA after transplant nephrectomy could be related to the recovery of T-cell proliferation and activation upon return to dialysis, others have shown a very rapid and early increase of DSAs after the nephrectomy. This suggests that the DSAs are absorbed by the kidney and that its removal allows their appearance in the circulation.
Changes in technology have improved methods for detecting changes in antibody character before and after nephrectomy. Studies examining this lend further credence to the argument that the failed kidney sequesters antibodies. Researchers at the University of Pittsburgh used single antigen bead technology to measure anti-HLA antibodies in patient sera pre- and post-nephrectomy. They found that the system had a limited ability to detect donor-reactive antibodies in the presence of the failed allograft: DSA reacting with class I HLA-A and B antigens, as well as with class II HLA-DRB1 antigens, increased significantly from pre- to post-nephrectomy. They concluded that the presence of the failed graft might therefore lead to incomplete characterization of HLA-A, B and DRB1 antigen mismatches before re-transplantation [58].
CONCLUSION
More study is needed to determine the optimal management of patients whose transplanted kidney has failed. Further evidence-based recommendations will require randomized controlled trials of immunosuppression algorithms in sufficiently large populations to yield meaningful results. Maintaining versus weaning immunosuppression needs to be studied to understand the effects of either strategy on preservation of residual renal function; risk of infection; risk of acute rejection; effect on sensitization and subsequent renal transplantation; and risk of cancer. If there is a benefit to weaning immunosuppression, then an optimal weaning protocol must be defined, one that maintains residual renal function and prevents patients from rejecting or becoming highly sensitized while minimizing the risk of infection, sepsis and infection-related cancers. Further study is also needed to refine the role of nephrectomy in the patient with a failed allograft, especially if there are plans for retransplantation. That includes a need to evaluate whether there is a role for immunosuppression after removal of the failed allograft. Finally, there is room for improvement in preparing patients with a failed kidney transplant for return to dialysis, particularly in planning vascular access to minimize the use of central venous catheters and identifying whether there is an optimal modality.
CONFLICT OF INTEREST STATEMENT
The authors report no conflicts of interest. The results presented in this paper have not been published previously in whole or part, except in abstract format.
REFERENCES
Author notes
Kassakian and Ajmal contributed equally to this work.