* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-02-07 17:02 rmccabe
From: rmccabe @ 2007-02-07 17:02 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2007-02-07 17:02:18

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: ModelBuilder.py cluster_adapters.py 

Log message:
	Fix for bz225558 (conga does not add fence_xvmd to cluster.conf)
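
As a rough illustration of the net effect, the fix toggles an empty
<fence_xvmd/> tag directly under the <cluster> tag in cluster.conf. A
minimal minidom sketch (not the actual ModelBuilder/TagObject code path;
it assumes fence_xvmd only ever appears as a direct child of cluster):

    from xml.dom import minidom

    def set_fence_xvmd(conf_xml, enable):
        # Mirror the addFenceXVM()/delFenceXVM() behavior added below:
        # at most one empty <fence_xvmd/> element under <cluster>.
        doc = minidom.parseString(conf_xml)
        cluster = doc.getElementsByTagName('cluster')[0]
        existing = doc.getElementsByTagName('fence_xvmd')
        if enable and not existing:
            cluster.appendChild(doc.createElement('fence_xvmd'))
        elif not enable:
            for tag in existing:
                cluster.removeChild(tag)
        return doc.toxml()

    # set_fence_xvmd('<cluster name="c1"/>', True) returns a document
    # whose cluster tag now contains <fence_xvmd/>.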

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.176.2.1&r2=1.176.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ModelBuilder.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.19.2.1&r2=1.19.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.227.2.1&r2=1.227.2.2

--- conga/luci/cluster/form-macros	2007/02/05 21:27:22	1.176.2.1
+++ conga/luci/cluster/form-macros	2007/02/07 17:02:17	1.176.2.2
@@ -943,6 +943,13 @@
 							tal:attributes="value clusterinfo/pjd" />
 					</td>
 				</tr>
+				<tr class="systemsTable">
+					<td class="systemsTable">Run XVM fence daemon</td>
+					<td class="systemsTable">
+						<input type="checkbox" name="run_xvmd"
+							tal:attributes="checked python: ('fence_xvmd' in clusterinfo and clusterinfo['fence_xvmd']) and 'checked' or ''" />
+					</td>
+				</tr>
 			</tbody>
 
 			<tfoot class="systemsTable">
--- conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/05 21:27:22	1.19.2.1
+++ conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/07 17:02:18	1.19.2.2
@@ -27,6 +27,7 @@
 from Samba import Samba
 from Multicast import Multicast
 from FenceDaemon import FenceDaemon
+from FenceXVMd import FenceXVMd
 from Netfs import Netfs
 from Clusterfs import Clusterfs
 from Resources import Resources
@@ -56,6 +57,7 @@
            'rm':Rm,
            'service':Service,
            'vm':Vm,
+           'fence_xvmd':FenceXVMd,
            'resources':Resources,
            'failoverdomain':FailoverDomain,
            'failoverdomains':FailoverDomains,
@@ -85,6 +87,7 @@
 FENCEDAEMON_PTR_STR="fence_daemon"
 SERVICE="service"
 VM="vm"
+FENCE_XVMD_STR="fence_xvmd"
 GULM_TAG_STR="gulm"
 MCAST_STR="multicast"
 CMAN_PTR_STR="cman"
@@ -119,6 +122,7 @@
     self.isModified = False
     self.quorumd_ptr = None
     self.usesQuorumd = False
+    self.fence_xvmd_ptr = None
     self.unusual_items = list()
     self.isVirtualized = False
     if mcast_addr == None:
@@ -217,6 +221,8 @@
         self.CMAN_ptr = new_object
       elif parent_node.nodeName == MCAST_STR:
         self.usesMulticast = True
+      elif parent_node.nodeName == FENCE_XVMD_STR:
+        self.fence_xvmd_ptr = new_object
 
     else:
       return None
@@ -591,6 +597,22 @@
 
     raise GeneralError('FATAL',"Couldn't find VM name %s in current list" % name)
 
+  def hasFenceXVM(self):
+    return self.fence_xvmd_ptr is not None
+
+  # Right now the fence_xvmd tag is empty, but allow the object
+  # to be passed in case attributes are added in the future.
+  def addFenceXVM(self, obj):
+    if self.fence_xvmd_ptr is not None:
+      self.cluster_ptr.removeChild(self.fence_xvmd_ptr)
+    self.cluster_ptr.addChild(obj)
+    self.fence_xvmd_ptr = obj
+
+  def delFenceXVM(self):
+    if self.fence_xvmd_ptr is not None:
+      self.cluster_ptr.removeChild(self.fence_xvmd_ptr)
+      self.fence_xvmd_ptr = None
+
   def getFenceDevices(self):
     if self.fencedevices_ptr == None:
       return list()
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/05 21:27:22	1.227.2.1
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/07 17:02:18	1.227.2.2
@@ -24,6 +24,7 @@
 from Tomcat5 import Tomcat5
 from OpenLDAP import OpenLDAP
 from Vm import Vm
+from FenceXVMd import FenceXVMd
 from Script import Script
 from Samba import Samba
 from QuorumD import QuorumD
@@ -1172,6 +1173,18 @@
 	except ValueError, e:
 		errors.append('Invalid post join delay: %s' % str(e))
 
+	run_xvmd = False
+	try:
+		run_xvmd = form.has_key('run_xvmd')
+	except:
+		pass
+
+	if run_xvmd is True and not model.hasFenceXVM():
+		fenceXVMd = FenceXVMd()
+		model.addFenceXVM(fenceXVMd)
+	elif not run_xvmd:
+		model.delFenceXVM()
+
 	try:
 		fd = model.getFenceDaemonPtr()
 		old_pj_delay = fd.getPostJoinDelay()
@@ -3513,6 +3526,7 @@
   #new cluster params - if rhel5
   #-------------
 
+  clumap['fence_xvmd'] = model.hasFenceXVM()
   gulm_ptr = model.getGULMPtr()
   if not gulm_ptr:
     #Fence Daemon Props




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-09-21  3:11 rmccabe
From: rmccabe @ 2007-09-21  3:11 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-09-21 03:11:53

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: FenceHandler.py 

Log message:
	In RHEL 5.2 and 4.7, fence_scsi will accept "nodename" too, but use "node"
	for backward compatibility with older cluster nodes.
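
A hypothetical helper showing the compatibility rule (the patch itself
just renames the form field and the cluster.conf attribute from
"nodename" to "node"):

    def scsi_node_attr(form):
        # Accept either form key, but always emit the older "node"
        # attribute so pre-5.2/4.7 fence_scsi agents still work.
        nodename = (form.get('node') or form.get('nodename') or '').strip()
        if not nodename:
            raise ValueError('blank node name')
        return ('node', nodename)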

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.211&r2=1.212
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceHandler.py.diff?cvsroot=cluster&r1=1.24&r2=1.25

--- conga/luci/cluster/form-macros	2007/09/11 16:04:32	1.211
+++ conga/luci/cluster/form-macros	2007/09/21 03:11:53	1.212
@@ -3017,8 +3017,8 @@
 			<tr>
 				<td>Node name</td>
 				<td>
-					<input type="text" name="nodename" disabled="disabled"
-						tal:attributes="value request/nodename | nothing" />
+					<input type="text" name="node" disabled="disabled"
+						tal:attributes="value request/node | nothing" />
 				</td>
 			</tr>
 		</table>
--- conga/luci/site/luci/Extensions/FenceHandler.py	2007/09/21 03:02:46	1.24
+++ conga/luci/site/luci/Extensions/FenceHandler.py	2007/09/21 03:11:53	1.25
@@ -996,10 +996,10 @@
 	errors = list()
 
 	try:
-		nodename = form['nodename'].strip()
+		nodename = form['node'].strip()
 		if not nodename:
 			raise Exception, 'blank'
-		fenceinst.addAttribute('nodename', nodename)
+		fenceinst.addAttribute('node', nodename)
 	except Exception, e:
 		errors.append(FI_PROVIDE_NODENAME)
 




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-06-19 15:54 rmccabe
From: rmccabe @ 2007-06-19 15:54 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-06-19 15:54:10

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: LuciClusterActions.py 
	                           LuciClusterInfo.py 
	                           cluster_adapters.py 
	                           conga_constants.py 
Removed files:
	luci/python/Extensions: .cvsignore 

Log message:
	Fix bz238726: Conga provides no way to remove a dead node from a cluster (depends on 244867)
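
The new NodeForceDeleteFromCluster() below edits the configuration via a
surviving node instead of the dead one. Its agent selection boils down to
a two-pass fallback; a sketch using getRicciAgent() with the keyword
arguments exactly as called in the patch:

    def pick_ricci_agent(self, clustername, dead_node_names):
        # getRicciAgent() is the existing helper the patch calls;
        # its import is elided here. First pass: prefer an agent
        # that is neither the dead node nor currently busy.
        rc = getRicciAgent(self, clustername,
                exclude_names=dead_node_names, exclude_busy=True)
        if rc is None:
            # Second pass: accept a busy agent rather than giving up.
            rc = getRicciAgent(self, clustername,
                    exclude_names=dead_node_names)
        return rc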

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.90.2.23&r2=1.90.2.24
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/python/Extensions/.cvsignore.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciClusterActions.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1.4.1&r2=1.1.4.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciClusterInfo.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1.4.1&r2=1.1.4.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.120.2.30&r2=1.120.2.31
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.19.2.10&r2=1.19.2.11

--- conga/luci/cluster/form-macros	2007/06/18 18:39:31	1.90.2.23
+++ conga/luci/cluster/form-macros	2007/06/19 15:54:09	1.90.2.24
@@ -3117,7 +3117,11 @@
 					<option tal:attributes="value nodeinfo/delete_url"
 						tal:condition="python: not 'ricci_error' in nodeinfo">
 						Delete this node</option>
+					<option tal:attributes="value nodeinfo/force_delete_url"
+						tal:condition="python: 'ricci_error' in nodeinfo">
+						Force the deletion of this node</option>
 				</select>
+
 				<input type="button" value="Go"
 					onclick="if (this.form.gourl[this.form.gourl.selectedIndex].value && confirm(this.form.gourl[this.form.gourl.selectedIndex].text + '?')) return dropdown(this.form.gourl)" />
 				</form>
@@ -3129,6 +3133,7 @@
 				<select name="gourl">
 					<option value="">Choose a Task...</option>
 					<option tal:attributes="value nodeinfo/fence_url | nothing">Fence this node</option>
+					<option tal:attributes="value nodeinfo/force_delete_url | nothing">Force the deletion of this node</option>
 				</select>
 				<input type="button" value="Go"
 					onclick="if (this.form.gourl[this.form.gourl.selectedIndex].value && confirm(this.form.gourl[this.form.gourl.selectedIndex].text + '?')) return dropdown(this.form.gourl)" />
@@ -3537,6 +3542,7 @@
 						<select class="node" name="gourl">
 							<option value="">Choose a Task...</option>
 							<option tal:attributes="value nd/fence_it_url | nothing">Fence this node</option>
+							<option tal:attributes="value nd/force_delete_url| nothing">Force the deletion of this node</option>
 						</select>
 						<input type="button" value="Go"
 							onclick="if (this.form.gourl[this.form.gourl.selectedIndex].value && confirm(this.form.gourl[this.form.gourl.selectedIndex].text + '?')) return dropdown(this.form.gourl)" />
--- conga/luci/site/luci/Extensions/Attic/LuciClusterActions.py	2007/06/18 18:39:32	1.1.4.1
+++ conga/luci/site/luci/Extensions/Attic/LuciClusterActions.py	2007/06/19 15:54:10	1.1.4.2
@@ -17,7 +17,7 @@
 	CLUSTER_NODE_NEED_AUTH
 
 from conga_constants import CLUSTER_CONFIG, LUCI_DEBUG_MODE, \
-	NODE_DELETE, CLUSTER_DELETE, CLUSTERLIST, \
+	NODE_DELETE, NODE_FORCE_DELETE, CLUSTER_DELETE, CLUSTERLIST, \
 	NODE_FENCE, NODE_JOIN_CLUSTER, NODE_LEAVE_CLUSTER, NODE_REBOOT, \
 	RESOURCE_ADD, RESOURCE_CONFIG, RESOURCE_REMOVE, \
 	SERVICE_DELETE, SERVICE_RESTART, SERVICE_START, SERVICE_STOP
@@ -283,6 +283,64 @@
 				% (nodename_resolved, e, str(e)))
 	return True
 
+def NodeForceDeleteFromCluster(self, model, clustername, nodename, nodename_resolved):
+	rc = getRicciAgent(self, clustername,
+			exclude_names=[ nodename_resolved, nodename ], exclude_busy=True)
+
+	if rc is None:
+		rc = getRicciAgent(self, clustername,
+			exclude_names=[ nodename_resolved, nodename ])
+
+	if rc is None:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('NFDFC0: no agent to delete node %s "%s"' \
+				% (nodename_resolved, clustername))
+		return None
+
+	try:
+		model.deleteNodeByName(nodename.lower())
+	except Exception, e:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('NFDFC1: deleteNode %s: %r %s' \
+				% (nodename, e, str(e)))
+		return None
+
+	try:
+		model.setModified(True)
+		str_buf = str(model.exportModelAsString())
+		if not str_buf:
+			raise Exception, 'model string is blank'
+	except Exception, e:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('NFDFC2: exportModelAsString: %r %s' \
+				% (e, str(e)))
+		return None
+
+	batch_number, result = rq.setClusterConf(rc, str_buf)
+	if batch_number is None or result is None:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('NFDFC3: batch number is None')
+		return None
+
+	try:
+		ret = delClusterSystem(self, clustername, nodename_resolved)
+		if ret is not None:
+			raise Exception, ret
+	except Exception, e:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('NFDFC4: error deleting %s: %r %s' \
+				% (nodename_resolved, e, str(e)))
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(),
+			str(batch_number), NODE_FORCE_DELETE,
+			'Forcing the deletion of node "%s"' % nodename)
+	except Exception, e:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('NFDFC5: failed to set flags: %r %s' \
+				% (e, str(e)))
+	return True
+
 def NodeDeleteFromCluster(	self,
 							rc,
 							model,
@@ -354,7 +412,7 @@
 		batch_number, result = rq.setClusterConf(rc2, str_buf)
 		if batch_number is None:
 			if LUCI_DEBUG_MODE is True:
-				luci_log.debug_verbose('ND8: batch number is None after del node in NTP')
+				luci_log.debug_verbose('ND8: batch number is None')
 			return None
 
 	try:
--- conga/luci/site/luci/Extensions/Attic/LuciClusterInfo.py	2007/06/18 18:39:32	1.1.4.1
+++ conga/luci/site/luci/Extensions/Attic/LuciClusterInfo.py	2007/06/19 15:54:10	1.1.4.2
@@ -17,7 +17,7 @@
 
 from conga_constants import CLUSTER_CONFIG, CLUSTER_DELETE, \
 	CLUSTER_PROCESS, CLUSTER_RESTART, CLUSTER_START, CLUSTER_STOP, \
-	FDOM, FENCEDEV, NODE, NODE_ACTIVE, \
+	NODE_FORCE_DELETE, FDOM, FENCEDEV, NODE, NODE_ACTIVE, \
 	NODE_ACTIVE_STR, NODE_DELETE, NODE_FENCE, NODE_INACTIVE, \
 	NODE_INACTIVE_STR, NODE_JOIN_CLUSTER, NODE_LEAVE_CLUSTER, \
 	NODE_PROCESS, NODE_REBOOT, NODE_UNKNOWN, NODE_UNKNOWN_STR, \
@@ -149,18 +149,10 @@
 	if not doc:
 		try:
 			from LuciDB import getClusterStatusDB
+			fvars = GetReqVars(request, [ 'clustername' ])
 
-			clustername = cluname
+			clustername = fvars['clustername']
 			if clustername is None:
-				try:
-					clustername = request['clustername']
-				except:
-					try:
-						clustername = request.form['clustername']
-					except:
-						pass
-
-			if not clustername:
 				raise Exception, 'unable to determine cluster name'
 
 			cinfo = getClusterStatusDB(self, clustername)
@@ -860,6 +852,8 @@
 	else:
 		infohash['fence_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
 			% (baseurl, NODE_PROCESS, NODE_FENCE, nodename, clustername)
+		infohash['force_delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+			% (baseurl, NODE_PROCESS, NODE_FORCE_DELETE, nodename, clustername)
 
 	# figure out current services running on this node
 	svc_dict_list = list()
@@ -1021,6 +1015,8 @@
 		else:
 			nl_map['fence_it_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
 				% (baseurl, NODE_PROCESS, NODE_FENCE, name, clustername)
+			nl_map['force_delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+				% (baseurl, NODE_PROCESS, NODE_FORCE_DELETE, name, clustername)
 
 		# figure out current services running on this node
 		svc_dict_list = list()
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/06/18 18:39:32	1.120.2.30
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/06/19 15:54:10	1.120.2.31
@@ -35,12 +35,12 @@
 	DISABLE_SVC_TASK, ENABLE_SVC_TASK, FDOM, FDOM_ADD, FENCEDEV, \
 	FENCEDEV_NODE_CONFIG, FENCEDEVS, FLAG_DESC, INSTALL_TASK, \
 	LAST_STATUS, LUCI_DEBUG_MODE, NODE, NODE_ADD, NODE_DELETE, \
-	NODE_FENCE, NODE_JOIN_CLUSTER, NODE_LEAVE_CLUSTER, NODE_REBOOT, \
-	NODES, POSSIBLE_REBOOT_MESSAGE, PRE_CFG, PRE_INSTALL, PRE_JOIN, \
-	REBOOT_TASK, REDIRECT_MSG, RESOURCES, RICCI_CONNECT_FAILURE, \
+	NODE_FENCE, NODE_FORCE_DELETE, NODE_JOIN_CLUSTER, NODE_LEAVE_CLUSTER, \
+	NODE_REBOOT, NODES, POSSIBLE_REBOOT_MESSAGE, PRE_CFG, PRE_INSTALL, \
+	PRE_JOIN, REBOOT_TASK, REDIRECT_MSG, RESOURCES, RICCI_CONNECT_FAILURE, \
 	RICCI_CONNECT_FAILURE_MSG, SEND_CONF, SERVICE_ADD, SERVICE_CONFIG, \
 	SERVICE_LIST, SERVICES, START_NODE, TASKTYPE, VM_ADD, VM_CONFIG, \
-	REDIRECT_SEC
+	REDIRECT_SEC, LUCI_CLUSTER_BASE_URL
 
 from FenceHandler import validateNewFenceDevice, \
 	validateFenceDevice, validate_fenceinstance, \
@@ -718,7 +718,7 @@
 	errors = list()
 
 	try:
-		form_xml = request['form_xml']
+		form_xml = request['form_xml'].strip()
 		if not form_xml:
 			raise KeyError, 'form_xml must not be blank'
 	except Exception, e:
@@ -740,7 +740,7 @@
 		doc = minidom.parseString(form_xml)
 		forms = doc.getElementsByTagName('form')
 		if len(forms) < 1:
-			raise
+			raise Exception, 'invalid XML'
 	except Exception, e:
 		if LUCI_DEBUG_MODE is True:
 			luci_log.debug_verbose('vSA1: error: %r %s' % (e, str(e)))
@@ -1681,7 +1681,7 @@
 	errors = list()
 
 	try:
-		form_xml = request['fence_xml']
+		form_xml = request['fence_xml'].strip()
 		if not form_xml:
 			raise KeyError, 'form_xml must not be blank'
 	except Exception, e:
@@ -2637,7 +2637,9 @@
 	return getRicciAgent(self, clustername)
 
 def clusterTaskProcess(self, model, request):
-	fvar = GetReqVars(request, [ 'task', 'clustername' ])
+	fvar = GetReqVars(request, [ 'task', 'clustername', 'URL' ])
+
+	baseurl = fvar['URL'] or LUCI_CLUSTER_BASE_URL
 
 	task = fvar['task']
 	if task is None:
@@ -2646,7 +2648,7 @@
 		return 'No cluster task was given'
 
 	if not model:
-		cluname = fvar['cluname']
+		cluname = fvar['clustername']
 		if cluname is None:
 			if LUCI_DEBUG_MODE is True:
 				luci_log.debug('CTP1: no cluster name')
@@ -2684,7 +2686,7 @@
 
 	response = request.RESPONSE
 	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-		% (request['URL'], redirect_page, model.getClusterName()))
+		% (baseurl, redirect_page, model.getClusterName()))
 
 def nodeTaskProcess(self, model, request):
 	fvar = GetReqVars(request, [ 'task', 'clustername', 'nodename', 'URL' ])
@@ -2692,6 +2694,7 @@
 	task = fvar['task']
 	clustername = fvar['clustername']
 	nodename = fvar['nodename']
+	baseurl = fvar['URL'] or LUCI_CLUSTER_BASE_URL
 
 	if clustername is None:
 		if LUCI_DEBUG_MODE is True:
@@ -2711,10 +2714,9 @@
 	nodename_resolved = resolve_nodename(self, clustername, nodename)
 	response = request.RESPONSE
 
-	if task != NODE_FENCE:
-		# Fencing is the only task for which we don't
-		# want to talk to the node on which the action is
-		# to be performed.
+	if task != NODE_FENCE and task != NODE_FORCE_DELETE:
+		# Fencing and forced deletion are the only tasks
+		# for which we don't want to talk to the target node.
 		try:
 			rc = RicciCommunicator(nodename_resolved)
 			if not rc:
@@ -2773,7 +2775,7 @@
 			return (False, {'errors': [ 'Node "%s" failed to leave cluster "%s"' % (nodename_resolved, clustername) ]})
 
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (request['URL'], NODES, clustername))
+			% (baseurl, NODES, clustername))
 	elif task == NODE_JOIN_CLUSTER:
 		from LuciClusterActions import NodeJoinCluster
 		if NodeJoinCluster(self, rc, clustername, nodename_resolved) is None:
@@ -2782,7 +2784,7 @@
 			return (False, {'errors': [ 'Node "%s" failed to join cluster "%s"' % (nodename_resolved, clustername) ]})
 
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (request['URL'], NODES, clustername))
+			% (baseurl, NODES, clustername))
 	elif task == NODE_REBOOT:
 		from LuciClusterActions import NodeReboot
 		if NodeReboot(self, rc, clustername, nodename_resolved) is None:
@@ -2792,7 +2794,7 @@
 				% nodename_resolved ]})
 
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (request['URL'], NODES, clustername))
+			% (baseurl, NODES, clustername))
 	elif task == NODE_FENCE:
 		from LuciClusterActions import NodeFence
 		if NodeFence(self, clustername, nodename, nodename_resolved) is None:
@@ -2801,7 +2803,7 @@
 			return (False, {'errors': [ 'Fencing of node "%s" failed' \
 				% nodename_resolved]})
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (request['URL'], NODES, clustername))
+			% (baseurl, NODES, clustername))
 	elif task == NODE_DELETE:
 		from LuciClusterActions import NodeDeleteFromCluster
 		if NodeDeleteFromCluster(self, rc, model, clustername, nodename, nodename_resolved) is None:
@@ -2810,7 +2812,16 @@
 			return (False, {'errors': [ 'Deletion of node "%s" from cluster "%s" failed' % (nodename_resolved, clustername) ]})
 
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (request['URL'], NODES, clustername))
+			% (baseurl, NODES, clustername))
+	elif task == NODE_FORCE_DELETE:
+		from LuciClusterActions import NodeForceDeleteFromCluster
+		if NodeForceDeleteFromCluster(self, model, clustername, nodename, nodename_resolved) is None:
+			if LUCI_DEBUG_MODE is True:
+				luci_log.debug_verbose('NTP13: nodeForceDelete failed')
+			return (False, {'errors': [ 'Deletion of node "%s" from cluster "%s" failed' % (nodename_resolved, clustername) ]})
+
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (baseurl, NODES, clustername))
 
 def isClusterBusy(self, req):
 	items = None
@@ -3178,11 +3189,13 @@
 
 	fvars = GetReqVars(req,
 				[ 'clustername', 'servicename', 'nodename', 'URL' ])
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+
 	ret = RestartCluSvc(self, rc, fvars)
 	if ret is None:
 		response = req.RESPONSE
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (fvars['URL'], SERVICE_LIST, fvars['clustername']))
+			% (baseurl, SERVICE_LIST, fvars['clustername']))
 	else:
 		return ret
 
@@ -3191,11 +3204,13 @@
 
 	fvars = GetReqVars(req,
 				[ 'clustername', 'servicename', 'nodename', 'URL' ])
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+
 	ret = StopCluSvc(self, rc, fvars)
 	if ret is None:
 		response = req.RESPONSE
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (fvars['URL'], SERVICE_LIST, fvars['clustername']))
+			% (baseurl, SERVICE_LIST, fvars['clustername']))
 	else:
 		return ret
 
@@ -3204,11 +3219,13 @@
 
 	fvars = GetReqVars(req,
 				[ 'clustername', 'servicename', 'nodename', 'URL' ])
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+
 	ret = StartCluSvc(self, rc, fvars)
 	if ret is None:
 		response = req.RESPONSE
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (fvars['URL'], SERVICE_LIST, fvars['clustername']))
+			% (baseurl, SERVICE_LIST, fvars['clustername']))
 	else:
 		return ret
 
@@ -3217,6 +3234,8 @@
 
 	fvars = GetReqVars(req,
 				[ 'clustername', 'servicename', 'nodename', 'URL' ])
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+
 	try:
 		model = LuciExtractCluModel(self, req,
 					cluster_name=fvars['clustername'])
@@ -3229,7 +3248,7 @@
 	if ret is None:
 		response = req.RESPONSE
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (fvars['URL'], SERVICES, fvars['clustername']))
+			% (baseurl, SERVICES, fvars['clustername']))
 	else:
 		return ret
 
@@ -3238,11 +3257,13 @@
 
 	fvars = GetReqVars(req,
 				[ 'clustername', 'servicename', 'nodename', 'URL' ])
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+
 	ret = MigrateCluSvc(self, rc, fvars)
 	if ret is None:
 		response = req.RESPONSE
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (fvars['URL'], SERVICE_LIST, fvars['clustername']))
+			% (baseurl, SERVICE_LIST, fvars['clustername']))
 	else:
 		return ret
 
@@ -3251,6 +3272,8 @@
 
 	fvars = GetReqVars(req,
 		[ 'clustername', 'resourcename', 'nodename', 'URL' ])
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+
 	try:
 		model = LuciExtractCluModel(self, req,
 					cluster_name=fvars['clustername'])
@@ -3265,12 +3288,14 @@
 	if ret is None:
 		response = req.RESPONSE
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (fvars['URL'], RESOURCES, fvars['clustername']))
+			% (baseurl, RESOURCES, fvars['clustername']))
 	else:
 		return ret
 
 def resourceAdd(self, req, model, res):
 	from LuciClusterActions import AddResource, EditResource
+	fvars = GetReqVars(req, [ 'URL' ])
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
 
 	try:
 		cluname = model.getClusterName()
@@ -3291,7 +3316,7 @@
 	if ret is None:
 		response = req.RESPONSE
 		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-			% (req['URL'], RESOURCES, cluname))
+			% (baseurl, RESOURCES, cluname))
 	else:
 		return ret
 
--- conga/luci/site/luci/Extensions/conga_constants.py	2007/06/18 18:39:33	1.19.2.10
+++ conga/luci/site/luci/Extensions/conga_constants.py	2007/06/19 15:54:10	1.19.2.11
@@ -59,17 +59,18 @@
 SYS_SERVICE_UPDATE		= '91'
 
 # Cluster tasks
-CLUSTER_STOP	= '1000'
-CLUSTER_START	= '1001'
-CLUSTER_RESTART	= '1002'
-CLUSTER_DELETE	= '1003'
+CLUSTER_STOP			= '1000'
+CLUSTER_START			= '1001'
+CLUSTER_RESTART			= '1002'
+CLUSTER_DELETE			= '1003'
 
 # Node tasks
-NODE_LEAVE_CLUSTER	= '100'
-NODE_JOIN_CLUSTER	= '101'
-NODE_REBOOT			= '102'
-NODE_FENCE			= '103'
-NODE_DELETE			= '104'
+NODE_LEAVE_CLUSTER		= '100'
+NODE_JOIN_CLUSTER		= '101'
+NODE_REBOOT				= '102'
+NODE_FENCE				= '103'
+NODE_DELETE				= '104'
+NODE_FORCE_DELETE		= '105'
 
 # General tasks
 BASECLUSTER	= '201'




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-05-03 20:16 rmccabe
From: rmccabe @ 2007-05-03 20:16 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	EXPERIMENTAL
Changes by:	rmccabe at sourceware.org	2007-05-03 20:16:38

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: FenceHandler.py HelperFunctions.py 
	                           StorageReport.py Variable.py 
	                           cluster_adapters.py 
	                           conga_constants.py 
	                           conga_storage_constants.py 
	                           homebase_adapters.py 
	                           ricci_communicator.py 
	                           ricci_defines.py storage_adapters.py 
	                           system_adapters.py 
Added files:
	luci/site/luci/Extensions: LuciDB.py ResourceHandler.py 
	                           RicciQueries.py 
	luci/site/luci/Extensions/ClusterModel: Apache.py 
	                                        BaseResource.py 
	                                        Cluster.py 
	                                        ClusterNode.py 
	                                        ClusterNodes.py 
	                                        Clusterfs.py Cman.py 
	                                        Device.py 
	                                        FailoverDomain.py 
	                                        FailoverDomainNode.py 
	                                        FailoverDomains.py 
	                                        Fence.py FenceDaemon.py 
	                                        FenceDevice.py 
	                                        FenceDevices.py 
	                                        FenceXVMd.py Fs.py 
	                                        GeneralError.py Gulm.py 
	                                        Heuristic.py Ip.py 
	                                        LVM.py Lockserver.py 
	                                        Method.py 
	                                        ModelBuilder.py 
	                                        Multicast.py MySQL.py 
	                                        NFSClient.py 
	                                        NFSExport.py Netfs.py 
	                                        OpenLDAP.py Postgres8.py 
	                                        QuorumD.py RefObject.py 
	                                        Resources.py Rm.py 
	                                        Samba.py Script.py 
	                                        Service.py TagObject.py 
	                                        Tomcat5.py Totem.py 
	                                        Vm.py __init__.py 
Removed files:
	luci/site/luci/Extensions: Apache.py BaseResource.py Cluster.py 
	                           ClusterNode.py ClusterNodes.py 
	                           Clusterfs.py Cman.py Device.py 
	                           FailoverDomain.py 
	                           FailoverDomainNode.py 
	                           FailoverDomains.py Fence.py 
	                           FenceDaemon.py FenceDevice.py 
	                           FenceDevices.py FenceXVMd.py Fs.py 
	                           GeneralError.py Gulm.py Heuristic.py 
	                           Ip.py LVM.py Lockserver.py Method.py 
	                           ModelBuilder.py Multicast.py MySQL.py 
	                           NFSClient.py NFSExport.py Netfs.py 
	                           OpenLDAP.py Postgres8.py QuorumD.py 
	                           README.txt RefObject.py Resources.py 
	                           Rm.py Samba.py Script.py Service.py 
	                           ServiceData.py TagObject.py 
	                           Tomcat5.py Totem.py Vm.py 
	                           clui_constants.py permission_check.py 
	                           ricci_bridge.py 

Log message:
	Big luci code refactor and cleanup, part 1.
	
	This is broken right now. Don't use this branch.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.198&r2=1.198.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciDB.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ResourceHandler.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/RicciQueries.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceHandler.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.18&r2=1.18.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/HelperFunctions.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.6&r2=1.6.4.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/StorageReport.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.23&r2=1.23.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Variable.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.4&r2=1.4.8.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.255&r2=1.255.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.39&r2=1.39.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_storage_constants.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.8&r2=1.8.8.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/homebase_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.50&r2=1.50.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_communicator.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.25&r2=1.25.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_defines.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=1.1.8.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/storage_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.9&r2=1.9.4.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/system_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=1.2.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Apache.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/BaseResource.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Cluster.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.5&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterNode.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterNodes.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Clusterfs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Cman.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Device.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FailoverDomain.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FailoverDomainNode.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FailoverDomains.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Fence.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceDaemon.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceDevice.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.3&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceDevices.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceXVMd.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Fs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/GeneralError.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Gulm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Heuristic.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Ip.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LVM.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Lockserver.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Method.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ModelBuilder.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.26&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Multicast.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/MySQL.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/NFSClient.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/NFSExport.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Netfs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/OpenLDAP.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Postgres8.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/QuorumD.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/README.txt.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/RefObject.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Resources.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Rm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Samba.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Script.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Service.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ServiceData.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/TagObject.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.3&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Tomcat5.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Totem.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Vm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.4&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/clui_constants.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/permission_check.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.62&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Apache.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/BaseResource.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Cluster.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/ClusterNode.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/ClusterNodes.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Clusterfs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Cman.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Device.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FailoverDomain.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FailoverDomainNode.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FailoverDomains.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Fence.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FenceDaemon.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FenceDevice.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FenceDevices.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FenceXVMd.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Fs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/GeneralError.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Gulm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Heuristic.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Ip.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/LVM.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Lockserver.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Method.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/ModelBuilder.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Multicast.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/MySQL.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/NFSClient.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/NFSExport.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Netfs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/OpenLDAP.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Postgres8.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/QuorumD.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/RefObject.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Resources.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Rm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Samba.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Script.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Service.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/TagObject.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Tomcat5.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Totem.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Vm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/__init__.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1

--- conga/luci/cluster/form-macros	2007/03/15 16:41:11	1.198
+++ conga/luci/cluster/form-macros	2007/05/03 20:16:37	1.198.2.1
@@ -4062,9 +4062,76 @@
 	<div class="service_comp_list">
 	<table class="systemsTable">
 		<thead class="systemsTable">
-			<tr class="systemsTable"><td class="systemsTable">
-				<p class="reshdr">Properties for <tal:block tal:replace="vminfo/name | string:virtual machine service"/></p>
-			</td></tr>
+			<tr class="systemsTable">
+				<td class="systemsTable">
+					<p class="reshdr">Properties for <tal:block tal:replace="vminfo/name | string:virtual machine service"/></p>
+				</td>
+			</tr>
+
+			<tr class="systemsTable">
+				<td class="cluster service service_action"
+					tal:condition="python: sinfo and 'innermap' in sinfo">
+				<form method="post">
+					<input type="hidden" name="pagetype" tal:attributes="
+						value request/pagetype | request/form/pagetype | nothing" />
+					<select name="gourl"
+						tal:define="global innermap sinfo/innermap;
+						starturls innermap/links">
+
+						<option value="">Choose a Task...</option>
+						<tal:block tal:condition="running">
+							<option
+								tal:attributes="value innermap/restarturl">Restart this service</option>
+
+							<option
+								tal:attributes="value innermap/disableurl">Disable this service</option>
+
+							<option value="">----------</option>
+
+							<tal:block tal:repeat="starturl innermap/links">
+								<option
+									tal:condition="not:exists: starturl/migrate"
+									tal:attributes="value starturl/url">Relocate this service to <span tal:replace="starturl/nodename" />
+								</option>
+							</tal:block>
+
+							<tal:block tal:condition="svc/is_vm | nothing">
+								<option value="">----------</option>
+								<tal:block tal:repeat="starturl innermap/links">
+									<option
+										tal:condition="exists: starturl/migrate"
+										tal:attributes="value starturl/url">Migrate this service to <span tal:replace="starturl/nodename" /></option>
+								</tal:block>
+							</tal:block>
+						</tal:block>
+
+						<tal:block tal:condition="not: running">
+							<option
+								tal:attributes="value innermap/enableurl">Enable this service</option>
+							<option value="">----------</option>
+
+							<tal:block tal:repeat="starturl innermap/links">
+								<option
+									tal:condition="not:exists: starturl/migrate"
+									tal:attributes="value starturl/url">Start this service on <span tal:replace="starturl/nodename" />
+								</option>
+							</tal:block>
+
+							<option value="">----------</option>
+
+							<option
+								tal:attributes="value innermap/delurl | nothing"
+								tal:content="string:Delete this service" />
+						</tal:block>
+					</select>
+
+					<input type="button" value="Go"
+						onclick="if (this.form.gourl[this.form.gourl.selectedIndex].value && confirm(this.form.gourl[this.form.gourl.selectedIndex].text + '?')) return dropdown(this.form.gourl)" />
+				</form>
+				</td>
+			</tr>
+		</thead>
+
 		<tfoot class="systemsTable">
 			<tr class="systemsTable">
 				<td>Automatically start this service</td>
@@ -4382,7 +4449,7 @@
 	<table class="cluster service" width="100%">
 		<tr class="cluster service info_top">
 			<td class="cluster service service_name">
-				<strong class="service_name">Service Name:</strong>
+				<strong class="service_name">Service Name</strong>
 				<span
 					tal:content="sinfo/name | nothing"
 					tal:attributes="class python: running and 'running' or 'stopped'" />
@@ -4413,7 +4480,7 @@
 								</option>
 							</tal:block>
 
-							<tal:block tal:condition="svc/is_vm | nothing">
+							<tal:block tal:condition="innermap/is_vm | nothing">
 								<option value="">----------</option>
 								<tal:block tal:repeat="starturl innermap/links">
 									<option
@@ -4451,8 +4518,18 @@
 
 		<tr class="cluster service info_middle">
 			<td class="cluster service service_status">
-				<strong>Service Status:</strong>
-				<span tal:replace="python: running and 'Running' or 'Stopped'" />
+				<strong>Service Status</strong>
+
+				<tal:block tal:condition="running">
+					<span tal:condition="exists:innermap/current"
+						tal:replace="innermap/current | nothing" />
+					<span tal:condition="not:exists:innermap/current"
+						tal:replace="string:Running" />
+				</tal:block>
+
+				<tal:block tal:condition="not:running">
+					Stopped
+				</tal:block>
 			</td>
 		</tr>
 	</table>
--- conga/luci/site/luci/Extensions/FenceHandler.py	2007/02/12 23:26:54	1.18
+++ conga/luci/site/luci/Extensions/FenceHandler.py	2007/05/03 20:16:38	1.18.2.1
@@ -1,6 +1,7 @@
-import re
-from Device import Device
-from conga_constants import FD_VAL_SUCCESS, FD_VAL_FAIL
+from ClusterModel.Device import Device
+
+FD_VAL_FAIL = 1
+FD_VAL_SUCCESS = 0
 
 FD_NEW_SUCCESS = 'New %s successfully added to cluster'
 FD_UPDATE_SUCCESS = 'Fence device %s successfully updated'
@@ -144,10 +145,11 @@
 	'fence_manual': ['name']
 }
 
-ILLEGAL_CHARS = re.compile(':| ')
 
 def makeNCName(name):
 	### name must conform to relaxNG ID type ##
+	import re
+	ILLEGAL_CHARS = re.compile(':| ')
 	return ILLEGAL_CHARS.sub('_', name)
 
 def check_unique_fd_name(model, name):
@@ -158,7 +160,7 @@
 	return True
 
 def validateNewFenceDevice(form, model):
-	from FenceDevice import FenceDevice
+	from ClusterModel.FenceDevice import FenceDevice
 	fencedev = FenceDevice()
 
 	try:
@@ -174,7 +176,6 @@
 	return (FD_VAL_FAIL, ret)
 
 def validateFenceDevice(form, model):
-	from FenceDevice import FenceDevice
 	try:
 		old_fence_name = form['orig_name'].strip()
 		if not old_fence_name:
--- conga/luci/site/luci/Extensions/HelperFunctions.py	2006/12/06 22:34:09	1.6
+++ conga/luci/site/luci/Extensions/HelperFunctions.py	2007/05/03 20:16:38	1.6.4.1
@@ -1,27 +1,63 @@
-
-import AccessControl
-
+from AccessControl import getSecurityManager
+from ricci_communicator import RicciCommunicator, CERTS_DIR_PATH
+from conga_constants import PLONE_ROOT
 import threading
-from ricci_communicator import RicciCommunicator
 
+def siteIsSetup(self):
+	import os
+	try:
+		return os.path.isfile('%sprivkey.pem' % CERTS_DIR_PATH) and os.path.isfile('%scacert.pem' % CERTS_DIR_PATH)
+	except:
+		pass
+	return False
+
+def strFilter(regex, replaceChar, arg):
+	import re
+	return re.sub(regex, replaceChar, arg)
+
+def userAuthenticated(self):
+	try:
+		if (isAdmin(self) or getSecurityManager().getUser().has_role('Authenticated', self.restrictedTraverse(PLONE_ROOT))):
+			return True
+	except Exception, e:
+		luci_log.debug_verbose('UA0: %s' % str(e)) 
+	return False
+
+def isAdmin(self):
+	try:
+		return getSecurityManager().getUser().has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
+	except Exception, e:
+		luci_log.debug_verbose('IA0: %s' % str(e)) 
+	return False
+
+def userIsAdmin(self, userId):
+	try:
+		return self.portal_membership.getMemberById(userId).has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
+	except Exception, e:
+		luci_log.debug_verbose('UIA0: %s: %s' % (userId, str(e)))
+	return False
+
+def resolveOSType(os_str):
+	if not os_str or os_str.find('Tikanga') != (-1) or os_str.find('FC6') != (-1) or os_str.find('Zod') != (-1):
+		return 'rhel5'
+	else:
+		return 'rhel4'
 
 def add_commas(self, str1, str2):
-    return str1 + '; ' + str2
-
+  return '%s; %s' % (str1, str2)
 
 def allowed_systems(self, user, systems):
   allowed = []
+  sm = getSecurityManager()
+  user = sm.getUser()
   for system in systems:
     #Does this take too long?
-    sm = AccessControl.getSecurityManager()
-    user =  sm.getUser()
-    if user.has_permission("View",system[1]):
+    if user.has_permission('View', system[1]):
       allowed.append(system)
   return allowed
 
-
-def access_to_host_allowed(self, hostname, allowed_systems):
-  for system in allowed_systems:
+def access_to_host_allowed(self, hostname, allowed_systems_list):
+  for system in allowed_systems_list:
     if system[0] == hostname:
       if len(self.allowed_systems(None, [system])) == 1:
           return True
@@ -31,7 +67,6 @@
 
 
 
-
 class Worker(threading.Thread):
     def __init__(self,
                  mutex,
@@ -192,7 +227,7 @@
     elif units.lower() == 'tb':
         return 1024*1024*1024*1024.0
     else:
-        raise "invalid size unit"
+        raise Exception, 'invalid size unit'
 
 def convert_bytes(bytes, units):
     c = int(bytes) / get_units_multiplier(units)
--- conga/luci/site/luci/Extensions/StorageReport.py	2007/03/05 20:45:17	1.23
+++ conga/luci/site/luci/Extensions/StorageReport.py	2007/05/03 20:16:38	1.23.2.1
@@ -6,7 +6,6 @@
 
 from Variable import parse_variable, Variable, VariableList
 from ricci_defines import *
-from PropsObject import PropsObject
 from conga_storage_constants import *
 from HelperFunctions import *
 
@@ -14,7 +13,7 @@
 
 
 
-SESSION_STORAGE_XML_REPORT='storage_xml_report_dir'
+SESSION_STORAGE_XML_REPORT = 'storage_xml_report_dir'
 
 
 
@@ -36,7 +35,7 @@
             except:
                 pass
         if self.__mappers == None or self.__m_temps == None:
-            raise 'invalid storage_xml_report'
+            raise Exception, 'invalid storage_xml_report'
         
         self.__mapp_dir = {} # holds mapper lists by mapper_type
         for mapp_node in self.__mappers:
@@ -85,7 +84,7 @@
     
     def get_mapper(self, id):
         if id == '':
-            raise 'empty mapper_id!!!'
+            raise Exception, 'empty mapper_id!!!'
         for m in self.__mappers:
             if m.getAttribute('mapper_id') == id:
                 return m.cloneNode(True)
@@ -188,7 +187,7 @@
                 if node.nodeName == PROPS_TAG:
                     props = node.cloneNode(True)
         if props == None:
-            raise 'mapper missing properties tag'
+            raise Exception, 'mapper missing properties tag'
         return props
     
     
@@ -334,9 +333,9 @@
     if succ_v.get_value() != True:
         # error
         if err_code_v.get_value() == -1:
-            raise Exception, 'Generic error on host:\n\n' + err_desc_v.get_value()
+            raise Exception, 'Generic error on host:\n\n%s' % err_desc_v.get_value()
         else:
-            raise Exception, 'Host responded: ' + err_desc_v.get_value()
+            raise Exception, 'Host responded: %s' % err_desc_v.get_value()
     
     #xml_report = fr_r.toxml()
     xml_report = fr_r
@@ -444,9 +443,9 @@
     
     type = mapper.getAttribute('mapper_type')
     pretty_type, pretty_target_name, pretty_source_name = get_pretty_mapper_info(type)
-    pretty_name = mapper_id.replace(type + ':', '').replace('/dev/', '')
-    pretty_targets_name = pretty_target_name + 's'
-    pretty_sources_name = pretty_source_name + 's'
+    pretty_name = mapper_id.replace('%s:' % type, '').replace('/dev/', '')
+    pretty_targets_name = '%ss' % pretty_target_name
+    pretty_sources_name = '%ss' % pretty_source_name
     icon_name, dummy1, dummy2 = get_mapper_icons(type)
     color = 'black'
     
@@ -474,21 +473,25 @@
     actions = []
     if removable:
         action = {'name' : 'Remove',
-                  'msg'  : 'Are you sure you want to remove ' + pretty_type + ' \\\'' + pretty_name + '\\\'?',
+                  'msg'  : 'Are you sure you want to remove %s \\\'%s\\\'?' % (pretty_type, pretty_name),
                   'link' : ''}
         actions.append(action)
     if type == MAPPER_VG_TYPE or type == MAPPER_MDRAID_TYPE or type == MAPPER_ATARAID_TYPE or type == MAPPER_MULTIPATH_TYPE:
-        action = {'name' : 'Add ' + mapper_ret['pretty_sources_name'], 
+        action = {'name' : 'Add %s' % mapper_ret['pretty_sources_name'], 
                   'msg'  : '',
-                  'link' : './?' + PAGETYPE + '=' + ADD_SOURCES + '&' + PT_MAPPER_ID + '=' + mapper_ret['mapper_id'] + '&' + PT_MAPPER_TYPE + '=' + mapper_ret['mapper_type']}
+                  'link' : './?%s=%s&%s=%s&%s=%s' % (PAGETYPE, ADD_SOURCES, PT_MAPPER_ID, mapper_ret['mapper_id'], PT_MAPPER_TYPE, mapper_ret['mapper_type'])}
         actions.append(action)
     if type == MAPPER_VG_TYPE:
         for nt in mapper_ret['new_targets']:
             if nt['props']['snapshot']['value'] == 'false':
                 if nt['new']:
-                    action = {'name' : 'New ' + mapper_ret['pretty_target_name'], 
+                    action = {'name' : 'New %s' % mapper_ret['pretty_target_name'], 
                               'msg'  : '',
-                              'link' : './?' + PAGETYPE + '=' + VIEW_BD + '&' + PT_MAPPER_ID + '=' + mapper_ret['mapper_id'] + '&' + PT_MAPPER_TYPE + '=' + mapper_ret['mapper_type'] + '&' + PT_PATH + '=' + nt['path']}
+                              'link' : './?%s=%s&%s=%s&%s=%s&%s=%s' \
+                                 % (PAGETYPE, VIEW_BD,
+                                    PT_MAPPER_ID, mapper_ret['mapper_id'], \
+                                    PT_MAPPER_TYPE, mapper_ret['mapper_type'], \
+                                    PT_PATH, nt['path'])}
                     actions.append(action)
                     break
     mapper_ret['actions'] = actions
@@ -515,7 +518,8 @@
         if snap['props']['snapshot']['value'] != 'true':
             continue
         orig_name = snap['props']['snapshot_origin']['value']
-        snap['description'] += ', ' + orig_name + '\'s Snapshot'
+        snap['description'] = '%s, %s\'s Snapshot' \
+            % (snap['description'], orig_name)
         
         # find origin
         for t in mapper['targets']:
@@ -628,9 +632,9 @@
     
     type = mapper.getAttribute('mapper_type')
     pretty_type, pretty_target_name, pretty_source_name = get_pretty_mapper_info(type)
-    pretty_name = mapper_id.replace(type + ':', '').replace('/dev/', '')
-    pretty_targets_name = pretty_target_name + 's'
-    pretty_sources_name = pretty_source_name + 's'
+    pretty_name = mapper_id.replace('%s:' % type, '').replace('/dev/', '')
+    pretty_targets_name = '%ss' % pretty_target_name
+    pretty_sources_name = '%ss' % pretty_source_name
     icon_name, dummy1, dummy2 = get_mapper_icons(type)
     color = 'black'
     
@@ -713,7 +717,7 @@
                 if request[v] == 'on':
                     sources_num += 1
         if sources_num < int(data['min_sources']) or sources_num > int(data['max_sources']):
-            return 'BAD: Invalid number of ' + data['pretty_sources_name'] + ' selected'
+            return 'BAD: Invalid number of %s selected' % data['pretty_sources_name']
         props = data['props']
         pass
     elif object_type == 'add_sources':
@@ -725,18 +729,18 @@
                 if request[v] == 'on':
                     sources_num += 1
         if sources_num == 0 or sources_num > len(data['new_sources']):
-            return 'BAD: Invalid number of ' + data['pretty_sources_name'] + ' selected'
+            return 'BAD: Invalid number of %s selected' % data['pretty_sources_name']
         pass
     
     if props != None:
         res = check_props(self, props, request)
         if res[0] == False:
-            return res[1] + ' ' + res[2]
+            return '%s %s' % (res[1], res[2])
     
     if content_props != None:
         res = check_props(self, content_props, request)
         if res[0] == False:
-            return res[1] + ' ' + res[2]
+            return '%s %s' % (res[1], res[2])
     
     return 'OK'
 def check_props(self, props, request):
@@ -753,7 +757,7 @@
                     try:
                         req_value = int(req_value)
                     except:
-                        msg = prop['pretty_name'] + ' is missing an integer value'
+                        msg = '%s is missing an integer value' % prop['pretty_name']
                         var_name = prop_name
                         valid = False
                         break
@@ -762,7 +766,8 @@
                     step = int(prop['validation']['step'])
                     r_val = (req_value / step) * step
                     if r_val > max or r_val < min:
-                        msg = prop['pretty_name'] + ' has to be within range ' + str(min) + ' - ' + str(max) + ' ' + prop['units']
+                        msg = '%s has to be within range %d-%d %s' \
+                          % (prop['pretty_name'], min, max, prop['units'])
                         var_name = prop_name
                         valid = False
                         break
@@ -770,7 +775,7 @@
                     try:
                         req_value = float(req_value)
                     except:
-                        msg = prop['pretty_name'] + ' is missing a float value'
+                        msg = '%s is missing a float value' % prop['pretty_name']
                         var_name = prop_name
                         valid = False
                         break
@@ -782,30 +787,33 @@
                         step = 0.000001
                     r_val = (req_value / step) * step
                     if r_val > max or r_val < min:
-                        msg = prop['pretty_name'] + ' has to be within range ' + str(min) + ' - ' + str(max) + ' ' + units
+                        msg = '%s has to be within range %s-%s %s' \
+                          % (prop['pretty_name'], min, max, units)
                         var_name = prop_name
                         valid = False
                         break
             elif prop['type'] == 'text':
                 if len(req_value) < int(prop['validation']['min_length']):
-                    msg = prop['pretty_name'] + ' has to have minimum length of ' + prop['validation']['min_length']
+                    msg = '%s has to have minimum length of %s' \
+                      % (prop['pretty_name'], prop['validation']['min_length'])
                     var_name = prop_name
                     valid = False
                     break
                 elif len(req_value) > int(prop['validation']['max_length']):
-                    msg = prop['pretty_name'] + ' has to have maximum length of ' + prop['validation']['max_length']
+                    msg = '%s has to have maximum length of %s' \
+                      % (prop['pretty_name'], prop['validation']['max_length'])
                     var_name = prop_name
                     valid = False
                     break
                 elif req_value in prop['validation']['reserved_words'].split(';') and req_value != '':
-                    msg = prop['pretty_name'] + ' contains reserved keyword. \nReserved keywords are ' + prop['validation']['reserved_words'].replace(';', ', ')
+                    msg = '%s contains a reserved keyword. \nReserved keywords are %s' % (prop['pretty_name'], prop['validation']['reserved_words'].replace(';', ', '))
                     var_name = prop_name
                     valid = False
                     break
                 # check illegal chars
                 for ch in prop['validation']['illegal_chars']:
                     if ch in req_value and ch != '':
-                        msg = prop['pretty_name'] + ' contains illegal character. \nIllegal characters are ' + prop['validation']['illegal_chars'].replace(';', ', ')
+                        msg = '%s contains an illegal character. \nIllegal characters are %s' % (prop['pretty_name'], prop['validation']['illegal_chars'].replace(';', ', '))
                         var_name = prop_name
                         valid = False
                         break
@@ -816,7 +824,7 @@
 
 def apply(self, ricci, storage_report, request):
     if validate(self, storage_report, request) != 'OK':
-        raise 'Internal error: input not validated!!!'
+        raise Exception, 'Internal error: input not validated!!!'
     
     session = request.SESSION
     
@@ -915,7 +923,7 @@
                     if node.nodeName == VARIABLE_TAG:
                         if node.getAttribute('mutable') == 'true':
                             var_name = node.getAttribute('name')
-                            req_name = 'content_variable_' + selected_content_id + '_' + var_name
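+                            # request fields are named content_variable_<content_id>_<var_name>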
+                            req_name = 'content_variable_%s_%s' % (selected_content_id, var_name)
                             if req_name in request:
                                 if selected_content_data['props'][req_name]['type'] == 'int':
                                     if selected_content_data['props'][req_name]['units'] != 'bytes':
@@ -1045,7 +1053,7 @@
                         if node.nodeName == VARIABLE_TAG:
                             if node.getAttribute('mutable') == 'true':
                                 var_name = node.getAttribute('name')
-                                req_name = 'content_variable_' + selected_content_id + '_' + var_name
+                                req_name = 'content_variable_%s_%s' % (selected_content_id, var_name)
                                 if req_name in request:
                                     if selected_content_data['props'][req_name]['type'] == 'int':
                                         if selected_content_data['props'][req_name]['units'] != 'bytes':
@@ -1308,10 +1316,10 @@
     
     
     if batch_id == '':
-        raise 'unsupported function'
+        raise Exception, 'unsupported function'
     else:
         invalidate_storage_report(request.SESSION, storagename)
-        return batch_id;
+        return batch_id
 
 
 def get_storage_batch_result(self, 
@@ -1328,7 +1336,7 @@
         # ricci down
         error   = True
         url     = url
-        msg     = 'Unable to contact ' + storagename
+        msg     = 'Unable to contact %s' % storagename
     else:
         batch = 'no batch'
         try:
@@ -1338,12 +1346,13 @@
         if batch == 'no batch':
             error = True
             url   = url
-            msg   = 'Ricci on ' + storagename + ' responded with error. No detailed info available.'
+            msg   = 'Ricci on %s responded with error. No detailed info available.' % storagename
         elif batch == None:
             # no such batch
             error     = False
             completed = True
-            url      += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
+            url       = '%s?%s=%s&%s=%s' \
+                % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
             msg       = 'No such batch'
         else:
             DEFAULT_ERROR = 'extract_module_status() failed'
@@ -1354,8 +1363,9 @@
                 pass
             if code == DEFAULT_ERROR:
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Ricci on ' + storagename + ' sent malformed response'
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Ricci on %s sent a malformed response' % storagename
             elif code == -101 or code == -102:
                 # in progress
                 error     = False
@@ -1364,23 +1374,27 @@
             elif code == -103:
                 # module removed from scheduler
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Ricci on ' + storagename + ' removed request from scheduler. File bug report against ricci.' 
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Ricci on %s removed request from scheduler. File bug report against ricci.' % storagename
             elif code == -104:
                 # module failure
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Ricci on ' + storagename + ' failed to execute storage module; reinstall it.'
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Ricci on %s failed to execute storage module; reinstall it.' % storagename
             elif code == -2:
                 # API error
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Luci server used invalid API to communicate with ' + storagename + '. File a bug report against luci.'
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Luci server used invalid API to communicate with %s. File a bug report against luci.' % storagename
             elif code == -1:
                 # undefined error
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Reason for failure (as reported by ' + storagename + '): ' + err_msg
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Reason for failure (as reported by %s): %s' % (storagename, err_msg)
             elif code == 0:
                 # no error
                 error     = False
@@ -1393,41 +1407,49 @@
             elif code == 1:
                 # mid-air
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'Mid-Air collision (storage on ' + storagename + ' has changed since last probe). '
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'Mid-Air collision (storage on %s has changed since last probe).' % storagename
             elif code == 2:
                 # validation error
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
                 msg    = 'Validation error. File bug report against Luci.'
             elif code == 3:
                 # unmount error
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'Unmount failure: ' + err_msg
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'Unmount failure: %s' % err_msg
             elif code == 4:
                 # clvmd error
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'clvmd (clustered LVM daemon) is not running on ' + storagename
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'clvmd (clustered LVM daemon) is not running on %s' % storagename
             elif code == 5:
                 # not quorate
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
                 msg    = 'Cluster quorum is required, and yet cluster is not quorate. Start cluster, and try again.'
             elif code == 6:
                 # LVM cluster locking not enabled
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'LVM cluster locking is not enabled on ' + storagename
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'LVM cluster locking is not enabled on %s' % storagename
             elif code == 7:
                 # cluster not running
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'Cluster infrastructure is not running on ' + storagename
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'Cluster infrastructure is not running on %s' % storagename
             elif code > 8:
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
                 msg    = err_msg
     
     return {'error'        : error,
@@ -1452,21 +1474,21 @@
             if node.nodeName == 'module':
                 module_r = node
     if module_r == None:
-        raise 'missing <module/> in <batch/>'
+        raise Exception, 'missing <module/> in <batch/>'
     resp_r = None
     for node in module_r.childNodes:
         if node.nodeType == xml.dom.Node.ELEMENT_NODE:
             if node.nodeName == RESPONSE_TAG:
                 resp_r = node
     if resp_r == None:
-        raise 'missing <response/> in <module/>'
+        raise Exception, 'missing <response/> in <module/>'
     fr_r = None
     for node in resp_r.childNodes:
         if node.nodeType == xml.dom.Node.ELEMENT_NODE:
             if node.nodeName == FUNC_RESP_TAG:
                 fr_r = node
     if fr_r == None:
-        raise 'missing <function_response/> in <response/>'
+        raise Exception, 'missing <function_response/> in <response/>'
     vars = {}
     for node in fr_r.childNodes:
         try:
@@ -1489,26 +1511,24 @@
         bd_path     = bd.getAttribute('path')
         mapper_type = bd.getAttribute('mapper_type')
         mapper_id   = bd.getAttribute('mapper_id')
-    
-    url  = main_url + '?'
-    url += STONAME + '=' + storagename
+
+    url_list = list()
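+    # collect URL fragments and join them once at the end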
+    url_list.append('%s?%s=%s' % (main_url, STONAME, storagename))
     if mapper_type != '':
-        url += '&' + PT_MAPPER_TYPE + '=' + mapper_type
+        url_list.append('&%s=%s' % (PT_MAPPER_TYPE, mapper_type))
     if mapper_id != '':
-        url += '&' + PT_MAPPER_ID + '=' + mapper_id
+        url_list.append('&%s=%s' % (PT_MAPPER_ID, mapper_id))
     if bd_path != '':
-        url += '&' + PT_PATH + '=' + bd_path
+        url_list.append('&%s=%s' % (PT_PATH, bd_path))
     
     if mapper_type == '':
-        url += '&' + PAGETYPE + '=' + STORAGE
+        url_list.append('&%s=%s' % (PAGETYPE, STORAGE))
     elif bd_path != '':
-        url += '&' + PAGETYPE + '=' + VIEW_BD
+        url_list.append('&%s=%s' % (PAGETYPE, VIEW_BD))
     else:
-        url += '&' + PAGETYPE + '=' + VIEW_MAPPER
-    
-    return url
-                        
-                        
+        url_list.append('&%s=%s' % (PAGETYPE, VIEW_MAPPER))
+
+    return ''.join(url_list)
 
 
 def get_bd_data_internal(session, bd_xml, mapper_xml):
@@ -1527,11 +1547,12 @@
     color = 'black'
     
     size_in_units, units = bytes_to_value_units(props['size']['value'])
-    description = str(size_in_units) + ' ' + units
-    
+
+    description = None
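+    # None until a mapper-specific description is set; a plain size string is used below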
     if mapper_type == MAPPER_SYS_TYPE:
         if 'scsi_id' in props:
-            description += ', SCSI ID = ' + props['scsi_id']['value']
+            description = '%s %s, SCSI ID = %s' \
+                % (size_in_units, units, props['scsi_id']['value'])
             icon_name = 'icon_bd_scsi.png'
     elif mapper_type == MAPPER_VG_TYPE:
         pretty_name = props['lvname']['value']
@@ -1539,13 +1560,17 @@
         if props['snapshot']['value'] == 'true':
             icon_name = 'icon_bd_LV_snapshot.png'
             pretty_type = 'Snapshot'
+
+    if description is None:
+        description = '%s %s' % (size_in_units, units)
     
     if bd_xml.nodeName == BD_TEMPLATE:
-        path = 'unused_segment'
         if mapper_type == MAPPER_PT_TYPE:
-            path += '_' + props['partition_begin']['value']
-            path += '_' + props['partition_type']['value']
-        pretty_type = 'New ' + pretty_type
+            path = 'unused_segment_%s_%s' \
+                % (props['partition_begin']['value'], props['partition_type']['value'])
+        else:
+            path = 'unused_segment'
+        pretty_type = 'New %s' % pretty_type
         pretty_name = 'Unused Space'
         data['new'] = True
     else:
@@ -1574,7 +1599,8 @@
     actions = []
     if removable:
         action = {'name' : 'Remove',
-                  'msg'  : 'Are you sure you want to remove ' + pretty_type + ' \\\'' + pretty_name + '\\\'?',
+                  'msg'  : 'Are you sure you want to remove %s \\\'%s\\\'?' \
+                     % (pretty_type, pretty_name),
                   'link' : ''}
         actions.append(action)
     if data['mapper_type'] == MAPPER_VG_TYPE and not data['new']:
@@ -1594,7 +1620,11 @@
                 if pretty_name in origs:
                     action = {'name' : 'Take Snapshot',
                               'msg'  : '', 
-                              'link' : './?' + PAGETYPE + '=' + VIEW_BD + '&' + PT_MAPPER_ID + '=' + data['mapper_id'] + '&' + PT_MAPPER_TYPE + '=' + data['mapper_type'] + '&' + PT_PATH + '=' + snap_lv['path']}
+                              'link' : './?%s=%s&%s=%s&%s=%s&%s=%s' \
+                                % (PAGETYPE, VIEW_BD, \
+                                   PT_MAPPER_ID, data['mapper_id'], \
+                                   PT_MAPPER_TYPE, data['mapper_type'], \
+                                   PT_PATH, snap_lv['path'])}
                     actions.append(action)
     data['actions'] = actions
     
@@ -1675,10 +1705,13 @@
         elif type == VARIABLE_TYPE_LIST_INT or type == VARIABLE_TYPE_LIST_STR:
             d_type = 'label'
             d_value = ''
+            d_val_list = list()
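+            # entries are joined below; the trailing ', ' is trimmed afterwards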
             for node in var.childNodes:
                 if node.nodeType == xml.dom.Node.ELEMENT_NODE:
                     if node.nodeName == VARIABLE_TYPE_LISTENTRY:
-                        d_value += node.getAttribute('value') + ', '
+                        d_val_list.append(node.getAttribute('value'))
+                        d_val_list.append(', ')
+            d_value = ''.join(d_val_list)
             if d_value != '':
                 d_value = d_value[:len(d_value)-2]
         elif type == 'hidden':
@@ -1811,7 +1844,7 @@
         old_props = d['props']
         new_props = {}
         for name in old_props:
-            new_name = 'content_variable_' + d['id'] + '_' + name
+            new_name = 'content_variable_%s_%s' % (d['id'], name)
             new_props[new_name] = old_props[name]
             new_props[new_name]['name'] = new_name
         d['props'] = new_props
@@ -1852,14 +1885,14 @@
     id = c_xml.getAttribute('type')
     if id == CONTENT_FS_TYPE:
         fs_type = c_xml.getAttribute('fs_type')
-        id += '_' + fs_type
+        id = '%s_%s' % (id, fs_type)
         name = get_pretty_fs_name(fs_type)
     elif id == CONTENT_NONE_TYPE:
         name = 'Empty'
     elif id == CONTENT_MS_TYPE:
         mapper_type = c_xml.getAttribute('mapper_type')
         mapper_id = c_xml.getAttribute('mapper_id')
-        id += '_' + mapper_type + '_' + mapper_id.replace(':', '__colon__')
+        id = '%s_%s_%s' % (id, mapper_type, mapper_id.replace(':', '__colon__'))
         if mapper_type == MAPPER_SYS_TYPE:
             pass
         elif mapper_type == MAPPER_VG_TYPE:
@@ -1877,7 +1910,7 @@
         elif mapper_type == MAPPER_iSCSI_TYPE:
             pass
         else:
-            name = 'Source of ' + mapper_type
+            name = 'Source of %s' % mapper_type
     elif id == CONTENT_HIDDEN_TYPE:
         name = 'Extended Partition'
     else:
@@ -1933,7 +1966,7 @@
                  'color_css'  : '#0192db', 
                  'description': mapper_data['pretty_targets_name']}
     if mapper_data['mapper_type'] == MAPPER_PT_TYPE:
-        upper_cyl['description'] = 'Physical ' + upper_cyl['description']
+        upper_cyl['description'] = 'Physical %s' % upper_cyl['description']
     
     offset = 0
     for t in mapper_data['targets_all']:
@@ -1963,7 +1996,7 @@
     
     # build highlights
     for d in upper_cyl['cyls']:
-        h_id = d['id'] + '_selected'
+        h_id = '%s_selected' % d['id']
         beg = d['beg']
         end = d['end']
         upper_cyl['highs'].append({'beg'  : beg, 
@@ -1980,22 +2013,22 @@
         if bd['mapper_type'] == MAPPER_VG_TYPE and not bd['new']:
             if 'origin' in bd:
                 # snapshot
-                snap_id = bd['path'] + '_snapshot'
+                snap_id = '%s_snapshot' % bd['path']
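+                # record highlight ids linking the snapshot with its origin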
                 upper_cyl['highs'].append({'beg'  : beg, 
                                            'end'  : end, 
                                            'id'   : snap_id,
                                            'type' : 'snapshot'})
                 orig = bd['origin']
-                high_list[d['id']].append(orig['path'] + '_origin')
+                high_list[d['id']].append('%s_origin' % orig['path'])
                 high_list[d['id']].append(snap_id)
             if 'snapshots' in bd:
                 # origin
                 upper_cyl['highs'].append({'beg'  : beg, 
                                            'end'  : end, 
-                                           'id'   : bd['path'] + '_origin',
+                                           'id'   : '%s_origin' % bd['path'],
                                            'type' : 'snapshot-origin'})
                 for snap in bd['snapshots']:
-                    high_list[d['id']].append(snap['path'] + '_snapshot')
+                    high_list[d['id']].append('%s_snapshot' % snap['path'])
                     
         
         
@@ -2025,7 +2058,7 @@
         offset = end
     
     if mapper_data['mapper_type'] == MAPPER_PT_TYPE:
-        lower_cyl['description'] = 'Logical ' + mapper_data['pretty_targets_name']
+        lower_cyl['description'] = 'Logical %s' % mapper_data['pretty_targets_name']
         lower_cyl['cyls']        = []
         lower_cyl['color']       = 'blue'
         lower_cyl['color_css']   = '#0192db'
@@ -2065,7 +2098,7 @@
     
     # build highlights
     for d in lower_cyl['cyls']:
-        h_id = d['id'] + '_selected'
+        h_id = '%s_selected' % d['id']
         beg = d['beg']
         end = d['end']
         lower_cyl['highs'].append({'beg'  : beg, 
--- conga/luci/site/luci/Extensions/Variable.py	2006/10/16 07:39:27	1.4
+++ conga/luci/site/luci/Extensions/Variable.py	2007/05/03 20:16:38	1.4.8.1
@@ -1,15 +1,12 @@
-
 import xml.dom
 
-from ricci_defines import *
-
-
+from ricci_defines import VARIABLE_TAG, VARIABLE_TYPE_BOOL, VARIABLE_TYPE_FLOAT, VARIABLE_TYPE_INT, VARIABLE_TYPE_INT_SEL, VARIABLE_TYPE_LISTENTRY, VARIABLE_TYPE_LIST_INT, VARIABLE_TYPE_LIST_STR, VARIABLE_TYPE_LIST_XML, VARIABLE_TYPE_STRING, VARIABLE_TYPE_STRING_SEL, VARIABLE_TYPE_XML
 
 def parse_variable(node):
     if node.nodeType != xml.dom.Node.ELEMENT_NODE:
-        raise 'not a variable'
+        raise Exception, 'not a variable'
     if node.nodeName != str(VARIABLE_TAG):
-        raise 'not a variable'
+        raise Exception, 'not a variable'
     
     attrs_dir = {}
     attrs = node.attributes
@@ -18,9 +15,9 @@
         attrValue = attrNode.nodeValue
         attrs_dir[attrName.strip()] = attrValue
     if ('name' not in attrs_dir) or ('type' not in attrs_dir):
-        raise 'incomplete variable'
+        raise Exception, 'incomplete variable'
     if (attrs_dir['type'] != VARIABLE_TYPE_LIST_INT and attrs_dir['type'] != VARIABLE_TYPE_LIST_STR and attrs_dir['type'] != VARIABLE_TYPE_LIST_XML and attrs_dir['type'] != VARIABLE_TYPE_XML) and ('value' not in attrs_dir):
-        raise 'incomplete variable'
+        raise Exception, 'incomplete variable'
     
     mods = {}
     for mod in attrs_dir:
@@ -42,7 +39,7 @@
             else:
                 continue
             if v == None:
-                raise 'invalid listentry'
+                raise Exception, 'invalid listentry'
             value.append(v)
         return VariableList(attrs_dir['name'], value, mods, VARIABLE_TYPE_LIST_STR)
     elif attrs_dir['type'] == VARIABLE_TYPE_LIST_XML:
@@ -61,7 +58,7 @@
     elif attrs_dir['type'] == VARIABLE_TYPE_INT_SEL:
         value = int(attrs_dir['value'])
         if 'valid_values' not in mods:
-            raise 'missing valid_values'
+            raise Exception, 'missing valid_values'
     elif attrs_dir['type'] == VARIABLE_TYPE_FLOAT:
         value = float(attrs_dir['value'])
     elif attrs_dir['type'] == VARIABLE_TYPE_STRING:
@@ -69,11 +66,11 @@
     elif attrs_dir['type'] == VARIABLE_TYPE_STRING_SEL:
         value = attrs_dir['value']
         if 'valid_values' not in mods:
-            raise 'missing valid_values'
+            raise Exception, 'missing valid_values'
     elif attrs_dir['type'] == VARIABLE_TYPE_BOOL:
         value = (attrs_dir['value'] == 'true')
     else:
-        raise 'invalid variable'
+        raise Exception, 'invalid variable'
     
     return Variable(attrs_dir['name'], value, mods)
 
@@ -85,7 +82,7 @@
         self.__name = str(name)
         self.__mods = mods
         self.set_value(value)
-    
+
     def get_name(self):
         return self.__name
     
@@ -105,7 +102,7 @@
             self.__value = float(value)
             
         elif self.__is_list(value):
-            raise "lists not implemented"
+            raise Exception, "lists not implemented"
             if self.__is_int(value[0]):
                 self.__type = VARIABLE_TYPE_LIST_INT
                 self.__value = value
@@ -113,7 +110,7 @@
                 self.__type = VARIABLE_TYPE_LIST_STR
                 self.__value = value
             else:
-                raise "not valid list type"
+                raise Exception, "not valid list type"
         elif self.__is_xml(value):
             self.__type = VARIABLE_TYPE_XML
             self.__value = value
@@ -151,7 +148,7 @@
             else:
                 elem.setAttribute('value', str(self.__value))
         else:
-            raise "lists not implemented"
+            raise Exception, "lists not implemented"
             l = self.__value
             for i in range(len(l)):
                 x = l[i]
@@ -176,7 +173,7 @@
             elif self.__is_string(value[0]):
                 return VARIABLE_TYPE_LIST_STR
             else:
-                raise "not valid list type"
+                raise Exception, "not valid list type"
         elif self.__is_xml(value):
             return VARIABLE_TYPE_XML
         else:
@@ -229,9 +226,9 @@
     
     def __init__(self, name, value, mods, list_type):
         if list_type != VARIABLE_TYPE_LIST_STR and list_type != VARIABLE_TYPE_LIST_XML:
-            raise 'invalid list type'
+            raise Exception, 'invalid list type'
         #if ! self.__is_list(value):
-        #    raise 'value not a list'
+        #    raise Exception, 'value not a list'
         self.__name = name
         self.__mods = mods
         self.__type = list_type
@@ -244,7 +241,7 @@
     def get_value(self):
         return self.__value
     def set_value(self, value):
-        raise 'VariableList.set_value() not implemented'
+        raise Exception, 'VariableList.set_value() not implemented'
     
     def type(self):
         return self.__type
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/05/03 19:51:21	1.255
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/05/03 20:16:38	1.255.2.1
@@ -1,111 +1,37 @@
-import socket
-from ModelBuilder import ModelBuilder
 from xml.dom import minidom
 import AccessControl
 from conga_constants import *
-from ricci_bridge import *
+import RicciQueries as rq
 from ricci_communicator import RicciCommunicator, RicciError, batch_status, extract_module_status
-import time
-import Products.ManagedSystem
-from Products.Archetypes.utils import make_uuid
-from Ip import Ip
-from Clusterfs import Clusterfs
-from Fs import Fs
-from FailoverDomain import FailoverDomain
-from FailoverDomainNode import FailoverDomainNode
-from RefObject import RefObject
-from ClusterNode import ClusterNode
-from NFSClient import NFSClient
-from NFSExport import NFSExport
-from Service import Service
-from Lockserver import Lockserver
-from Netfs import Netfs
-from Apache import Apache
-from MySQL import MySQL
-from Postgres8 import Postgres8
-from Tomcat5 import Tomcat5
-from OpenLDAP import OpenLDAP
-from Vm import Vm
-from FenceXVMd import FenceXVMd
-from Script import Script
-from Samba import Samba
-from LVM import LVM
-from QuorumD import QuorumD
-from Heuristic import Heuristic
-from clusterOS import resolveOSType
-from Fence import Fence
-from Method import Method
-from Totem import Totem
-from Device import Device
-from FenceHandler import validateNewFenceDevice, FENCE_OPTS, validateFenceDevice, validate_fenceinstance
-from GeneralError import GeneralError
-from homebase_adapters import manageCluster, createClusterSystems, havePermCreateCluster, setNodeFlag, delNodeFlag, userAuthenticated, getStorageNode, getClusterNode, delCluster, parseHostForm
+
+from ClusterModel.ModelBuilder import ModelBuilder
+from ClusterModel.FailoverDomain import FailoverDomain
+from ClusterModel.FailoverDomainNode import FailoverDomainNode
+from ClusterModel.RefObject import RefObject
+from ClusterModel.ClusterNode import ClusterNode
+from ClusterModel.Service import Service
+from ClusterModel.Lockserver import Lockserver
+from ClusterModel.Vm import Vm
+from ClusterModel.FenceXVMd import FenceXVMd
+from ClusterModel.QuorumD import QuorumD
+from ClusterModel.Heuristic import Heuristic
+from ClusterModel.Fence import Fence
+from ClusterModel.Method import Method
+from ClusterModel.GeneralError import GeneralError
+
+from HelperFunctions import resolveOSType
 from LuciSyslog import LuciSyslog
-from system_adapters import validate_svc_update
+from ResourceHandler import create_resource
+from FenceHandler import validateNewFenceDevice, FENCE_OPTS, validateFenceDevice, validate_fenceinstance, FD_VAL_FAIL, FD_VAL_SUCCESS
 
-#Policy for showing the cluster chooser menu:
-#1) If there are no clusters in the ManagedClusterSystems
-#folder, then only the admin user may see this menu, and
-#the configure option should not be displayed.
-#2)If there are clusters in the ManagedClusterSystems,
-#then only display chooser if the current user has
-#permissions on at least one. If the user is admin, show ALL clusters
+from system_adapters import validate_svc_update
+from homebase_adapters import manageCluster, createClusterSystems, havePermCreateCluster, setNodeFlag, delNodeFlag, userAuthenticated, getStorageNode, getClusterNode, delCluster, parseHostForm
 
 try:
 	luci_log = LuciSyslog()
 except:
 	pass
 
-def get_fsid_list(model):
-	obj_list = model.searchObjectTree('fs')
-	obj_list.extend(model.searchObjectTree('clusterfs'))
-	return map(lambda x: x.getAttribute('fsid') and int(x.getAttribute('fsid')) or 0, obj_list)
-
-def fsid_is_unique(model, fsid):
-	fsid_list = get_fsid_list(model)
-	return fsid not in fsid_list
-
-def generate_fsid(model, name):
-	import binascii
-	from random import random
-	fsid_list = get_fsid_list(model)
-
-	fsid = binascii.crc32(name) & 0xffff
-	dupe = fsid in fsid_list
-	while dupe is True:
-		fsid = (fsid + random.randrange(1, 0xfffe)) & 0xffff
-		dupe = fsid in fsid_list
-	return fsid
-
-def buildClusterCreateFlags(self, batch_map, clusterName):
-	path = str(CLUSTER_FOLDER_PATH + clusterName)
-
-	try:
-		clusterfolder = self.restrictedTraverse(path)
-	except Exception, e:
-		luci_log.debug_verbose('buildCCF0: no cluster folder at %s' % path)
-		return None
-
-	for key in batch_map.keys():
-		try:
-			key = str(key)
-			batch_id = str(batch_map[key])
-			#This suffix needed to avoid name collision
-			objname = str(key + "____flag")
-
-			clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#now designate this new object properly
-			objpath = str(path + "/" + objname)
-			flag = self.restrictedTraverse(objpath)
-
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, CLUSTER_ADD, "string")
-			flag.manage_addProperty(FLAG_DESC, "Creating node " + key + " for cluster " + clusterName, "string")
-			flag.manage_addProperty(LAST_STATUS, 0, "int")
-		except Exception, e:
-			luci_log.debug_verbose('buildCCF1: error creating flag for %s: %s' \
-				% (key, str(e)))
-
 def parseClusterNodes(self, request, cluster_os):
 	check_certs = False
 	try:
@@ -213,7 +139,7 @@
 				except Exception, e:
 					luci_log.debug_verbose('PCN3: %s: %s' % (cur_host, str(e)))
 
-				errors.append('%s reports it is a member of cluster \"%s\"' \
+				errors.append('%s reports it is a member of cluster "%s"' \
 					% (cur_host, cur_cluster_name))
 				luci_log.debug_verbose('PCN4: %s: already in %s cluster' \
 					% (cur_host, cur_cluster_name))
@@ -307,7 +233,7 @@
 		return (False, { 'errors': errors, 'messages': messages })
 
 	node_list = add_cluster['nodes'].keys()
-	batchNode = createClusterBatch(add_cluster['cluster_os'],
+	batchNode = rq.createClusterBatch(add_cluster['cluster_os'],
 					clusterName,
 					clusterName,
 					node_list,
@@ -350,7 +276,7 @@
 		except Exception, e:
 			luci_log.debug_verbose('validateCreateCluster0: %s: %s' \
 				% (i, str(e)))
-			errors.append('An error occurred while attempting to add cluster node \"%s\"' % i)
+			errors.append('An error occurred while attempting to add cluster node "%s"' % i)
 			if len(batch_id_map) == 0:
 				request.SESSION.set('create_cluster', add_cluster)
 				return (False, { 'errors': errors, 'messages': messages })
@@ -358,9 +284,11 @@
 
 	buildClusterCreateFlags(self, batch_id_map, clusterName)
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], CLUSTER_CONFIG, clusterName))
 
 def validateAddClusterNode(self, request):
+	import time
 	try:
 		request.SESSION.delete('add_node')
 	except:
@@ -399,7 +327,7 @@
 	if cluster_os is None:
 		cluster_folder = None
 		try:
-			cluster_folder = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH + clusterName))
+			cluster_folder = self.restrictedTraverse('%s%s' % (CLUSTER_FOLDER_PATH, clusterName))
 			if not cluster_folder:
 				raise Exception, 'cluster DB object is missing'
 		except Exception, e:
@@ -509,7 +437,7 @@
 				except Exception, e:
 					luci_log.debug_verbose('VACN6: %s: %s' % (cur_host, str(e)))
 
-				errors.append('%s reports it is already a member of cluster \"%s\"' % (cur_host, cur_cluster_name))
+				errors.append('%s reports it is already a member of cluster "%s"' % (cur_host, cur_cluster_name))
 				luci_log.debug_verbose('VACN7: %s: already in %s cluster' \
 					% (cur_host, cur_cluster_name))
 				continue
@@ -581,8 +509,7 @@
 			i = system_list[x]
 
 			try:
-				batch_node = addClusterNodeBatch(cluster_os,
-								clusterName,
+				batch_node = rq.addClusterNodeBatch(clusterName,
 								True,
 								True,
 								shared_storage,
@@ -603,7 +530,7 @@
 				except Exception, e:
 					luci_log.debug_verbose('VACN12: %s: %s' % (cur_host, str(e)))
 
-				errors.append('Unable to initiate cluster join for %s' % cur_host)
+				errors.append('Unable to initiate cluster join for node "%s"' % cur_host)
 				luci_log.debug_verbose('VACN13: %s: %s' % (cur_host, str(e)))
 				continue
 
@@ -625,7 +552,7 @@
 		if not conf_str:
 			raise Exception, 'Unable to save the new cluster model.'
 
-		batch_number, result = setClusterConf(cluster_ricci, conf_str)
+		batch_number, result = rq.setClusterConf(cluster_ricci, conf_str)
 		if not batch_number or not result:
 			raise Exception, 'batch or result is None'
 	except Exception, e:
@@ -638,7 +565,7 @@
 	# abort the whole process.
 	try:
 		while True:
-			batch_ret = checkBatch(cluster_ricci, batch_number)
+			batch_ret = rq.checkBatch(cluster_ricci, batch_number)
 			code = batch_ret[0]
 			if code == True:
 				break
@@ -696,7 +623,7 @@
 
 		if not success:
 			incomplete = True
-			errors.append('An error occurred while attempting to add cluster node \"%s\"' % cur_host)
+			errors.append('An error occurred while attempting to add cluster node "%s"' % cur_host)
 
 	if incomplete or len(errors) > 0:
 		request.SESSION.set('add_node', add_cluster)
@@ -705,7 +632,8 @@
 	buildClusterCreateFlags(self, batch_id_map, clusterName)
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], CLUSTER_CONFIG, clusterName))
 
 def validateServiceAdd(self, request):
 	errors = list()
@@ -770,12 +698,10 @@
 		try:
 			res_type = dummy_form['type'].strip()
 			if not res_type:
-				raise Exception, 'no resource type was given'
-			if not res_type in resourceAddHandler:
-				raise Exception, 'invalid resource type: %s' % res_type
+				raise Exception, 'no resource type'
 		except Exception, e:
 			luci_log.debug_verbose('vSA3: %s' % str(e))
-			return (False, {'errors': [ 'An invalid resource type was specified' ]})
+			return (False, {'errors': [ 'No resource type was specified' ]})
 
 		try:
 			if res_type == 'ip':
@@ -790,7 +716,7 @@
 				resObj = RefObject(newRes)
 				resObj.setRef(newRes.getName())
 			else:
-				resObj = resourceAddHandler[res_type](request, dummy_form)[0]
+				resObj = create_resource(res_type, dummy_form, model)
 		except Exception, e:
 			resObj = None
 			luci_log.debug_verbose('vSA4: type %s: %s' % (res_type, str(e)))
@@ -817,7 +743,7 @@
 			recovery = None
 		else:
 			if recovery != 'restart' and recovery != 'relocate' and recovery != 'disable':
-				errors.append('You entered an invalid recovery option: \"%s\" Valid options are \"restart\" \"relocate\" and \"disable\"')
+				errors.append('You entered an invalid recovery option: "%s" Valid options are "restart", "relocate", and "disable."' % recovery)
 	except:
 		recovery = None
 
@@ -919,7 +845,7 @@
 			luci_log.debug_verbose('vAS7: missing ricci hostname')
 			raise Exception, 'unknown ricci agent hostname'
 
-		batch_number, result = setClusterConf(rc, str(conf))
+		batch_number, result = rq.setClusterConf(rc, str(conf))
 		if batch_number is None or result is None:
 			luci_log.debug_verbose('vAS8: missing batch_number or result')
 			raise Exception, 'unable to save the new cluster configuration.'
@@ -929,14 +855,15 @@
 
 	try:
 		if request.form['action'] == 'edit':
-			set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_CONFIG, "Configuring service \'%s\'" % service_name)
+			set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_CONFIG, 'Configuring service "%s"' % service_name)
 		else:
-			set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_ADD, "Adding new service \'%s\'" % service_name)
+			set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_ADD, 'Creating service "%s"' % service_name)
 	except Exception, e:
 		luci_log.debug_verbose('vAS10: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + SERVICES + "&clustername=" + clustername + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], SERVICES, clustername))
 
 def validateResourceAdd(self, request):
 	try:
@@ -944,31 +871,41 @@
 		if not res_type:
 			raise KeyError, 'type is blank'
 	except Exception, e:
-		luci_log.debug_verbose('resourceAdd: type is blank')
+		luci_log.debug_verbose('VRA0: type is blank')
 		return (False, {'errors': ['No resource type was given.']})
 
+	try:
+		model = request.SESSION.get('model')
+	except Exception, e:
+		luci_log.debug_verbose('VRA1: no model: %s' % str(e))
+		return None
+	
 	errors = list()
 	try:
-		res = resourceAddHandler[res_type](request)
-		if res is None or res[0] is None or res[1] is None:
-			if res and res[2]:
-				errors.extend(res[2])
-			raise Exception, 'An error occurred while adding this resource'
-		model = res[1]
-		newres = res[0]
-		addResource(self, request, model, newres, res_type)
+		res = create_resource(res_type, request.form, model)
 	except Exception, e:
-		if len(errors) < 1:
-			errors.append('An error occurred while adding this resource')
+		errors.extend(e)
+
+	if len(errors) < 1:
+		try:
+			addResource(self, request, model, res)
+		except Exception, e:
+			errors.append('An error occurred while adding resource "%s"' \
+				% res.getName())
+	if len(errors) > 0:
+		errors.append('An error occurred while adding this resource')
 		luci_log.debug_verbose('resource error: %s' % str(e))
 		return (False, {'errors': errors})
 
+
 	return (True, {'messages': ['Resource added successfully']})
 
+
 ## Cluster properties form validation routines
 
 # rhel5 cluster version
 def validateMCastConfig(model, form):
+	import socket
 	try:
 		gulm_ptr = model.getGULMPtr()
 		if gulm_ptr:
@@ -1128,7 +1065,7 @@
 			if hint < 1:
 				raise ValueError, 'Heuristic interval values must be greater than 0'
 		except KeyError, e:
-			errors.append('No interval was given for heuristic #%d' % i + 1)
+			errors.append('No interval was given for heuristic %d' % (i + 1))
 		except ValueError, e:
 			errors.append('An invalid interval was given for heuristic %d: %s' \
 				% (i + 1, str(e)))
@@ -1232,9 +1169,7 @@
 
 	totem = model.getTotemPtr()
 	if totem is None:
-		cp = model.getClusterPtr()
-		totem = Totem()
-		cp.addChild(totem)
+		totem = model.addTotemPtr()
 
 	try:
 		token = form['token'].strip()
@@ -1491,7 +1426,7 @@
       % clustername)
 
   if rc:
-    batch_id, result = setClusterConf(rc, str(conf_str))
+    batch_id, result = rq.setClusterConf(rc, str(conf_str))
     if batch_id is None or result is None:
       luci_log.debug_verbose('VCC7: setCluserConf: batchid or result is None')
       errors.append('Unable to propagate the new cluster configuration for %s' \
@@ -1508,7 +1443,8 @@
     return (retcode, {'errors': errors, 'messages': messages})
 
   response = request.RESPONSE
-  response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername + '&busyfirst=true')
+  response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+	% (request['URL'], CLUSTER_CONFIG, clustername))
 
 def validateFenceAdd(self, request):
   errors = list()
@@ -1580,7 +1516,7 @@
         % clustername)
 
     if rc:
-      batch_id, result = setClusterConf(rc, str(conf_str))
+      batch_id, result = rq.setClusterConf(rc, str(conf_str))
       if batch_id is None or result is None:
         luci_log.debug_verbose('VFA: setCluserConf: batchid or result is None')
         errors.append('Unable to propagate the new cluster configuration for %s' \
@@ -1588,11 +1524,11 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-            CLUSTER_CONFIG, 'Adding new fence device \"%s\"' % retobj)
+            CLUSTER_CONFIG, 'Adding new fence device "%s"' % retobj)
         except:
           pass
 
-    response.redirect(request['URL'] + "?pagetype=" + FENCEDEV + "&clustername=" + clustername + "&fencename=" + retobj + '&busyfirst=true')
+    response.redirect('%s?pagetype=%s&clustername=%s&fencename=%s&busyfirst=true' % (request['URL'], FENCEDEV, clustername, retobj))
   else:
     errors.extend(retobj)
     return (False, {'errors': errors, 'messages': messages})
@@ -1672,7 +1608,7 @@
           % clustername)
 
     if rc:
-      batch_id, result = setClusterConf(rc, str(conf_str))
+      batch_id, result = rq.setClusterConf(rc, str(conf_str))
       if batch_id is None or result is None:
         luci_log.debug_verbose('VFA: setClusterConf: batchid or result is None')
         errors.append('Unable to propagate the new cluster configuration for %s' \
@@ -1680,11 +1616,11 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-            CLUSTER_CONFIG, 'Updating fence device \"%s\"' % retobj)
+            CLUSTER_CONFIG, 'Updating fence device "%s"' % retobj)
         except:
           pass
 
-    response.redirect(request['URL'] + "?pagetype=" + FENCEDEV + "&clustername=" + clustername + "&fencename=" + retobj + '&busyfirst=true')
+    response.redirect('%s?pagetype=%s&clustername=%s&fencename=%s&busyfirst=true' % (request['URL'], FENCEDEV, clustername, retobj))
   else:
     errors.extend(retobj)
     return (False, {'errors': errors, 'messages': messages})
@@ -1878,7 +1814,7 @@
 
 					# Add back the tags under the method block
 					# for the fence instance
-					if fence_type == 'fence_manual':
+					if type == 'fence_manual':
 						instance_list.append({'name': fencedev_name, 'nodename': nodename })
 					else:
 						instance_list.append({'name': fencedev_name })
@@ -1895,7 +1831,7 @@
 			# so the appropriate XML goes into the <method> block inside
 			# <node><fence>. All we need for that is the device name.
 			if not 'sharable' in fence_form:
-				if fence_type == 'fence_manual':
+				if type == 'fence_manual':
 					instance_list.append({'name': fencedev_name, 'nodename': nodename })
 				else:
 					instance_list.append({'name': fencedev_name })
@@ -1938,7 +1874,7 @@
 		conf = str(model.exportModelAsString())
 		if not conf:
 			raise Exception, 'model string is blank'
-		luci_log.debug_verbose('vNFC16: exported \"%s\"' % conf)
+		luci_log.debug_verbose('vNFC16: exported "%s"' % conf)
 	except Exception, e:
 		luci_log.debug_verbose('vNFC17: exportModelAsString failed: %s' \
 			% str(e))
@@ -1950,7 +1886,7 @@
 		return (False, {'errors': ['Unable to find a ricci agent for the %s cluster' % clustername ]})
 	ragent = rc.hostname()
 
-	batch_number, result = setClusterConf(rc, conf)
+	batch_number, result = rq.setClusterConf(rc, conf)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('vNFC19: missing batch and/or result')
 		return (False, {'errors': [ 'An error occurred while constructing the new cluster configuration.' ]})
@@ -1961,7 +1897,7 @@
 		luci_log.debug_verbose('vNFC20: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&nodename=%s&busyfirst=true' % (request['URL'], NODE, clustername, nodename))
 
 def deleteFenceDevice(self, request):
   errors = list()
@@ -2069,7 +2005,7 @@
         % clustername)
 
     if rc:
-      batch_id, result = setClusterConf(rc, str(conf_str))
+      batch_id, result = rq.setClusterConf(rc, str(conf_str))
       if batch_id is None or result is None:
         luci_log.debug_verbose('VFA: setCluserConf: batchid or result is None')
         errors.append('Unable to propagate the new cluster configuration for %s' \
@@ -2077,11 +2013,12 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-            CLUSTER_CONFIG, 'Removing fence device \"%s\"' % fencedev_name)
+            CLUSTER_CONFIG, 'Removing fence device "%s"' % fencedev_name)
         except:
           pass
 
-    response.redirect(request['URL'] + "?pagetype=" + FENCEDEVS + "&clustername=" + clustername + '&busyfirst=true')
+    response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], FENCEDEVS, clustername))
     return (True, {'errors': errors, 'messages': messages})
   else:
     errors.append(error_string)
@@ -2137,7 +2074,8 @@
 
 	if len(enable_list) < 1 and len(disable_list) < 1:
 		luci_log.debug_verbose('VDP4: no changes made')
-		response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename)
+		response.redirect('%s?pagetype=%s&clustername=%s&nodename=%s' \
+			% (request['URL'], NODE, clustername, nodename))
 
 	nodename_resolved = resolve_nodename(self, clustername, nodename)
 	try:
@@ -2149,18 +2087,19 @@
 		errors.append('Unable to connect to the ricci agent on %s to update cluster daemon properties' % nodename_resolved)
 		return (False, {'errors': errors})
 
-	batch_id, result = updateServices(rc, enable_list, disable_list)
+	batch_id, result = rq.updateServices(rc, enable_list, disable_list)
 	if batch_id is None or result is None:
 		luci_log.debug_verbose('VDP6: setCluserConf: batchid or result is None')
 		errors.append('Unable to update the cluster daemon properties on node %s' % nodename_resolved)
 		return (False, {'errors': errors})
 
 	try:
-		status_msg = 'Updating %s daemon properties:' % nodename_resolved
+		status_msg = 'Updating node "%s" daemon properties:' % nodename_resolved
 		if len(enable_list) > 0:
-			status_msg += ' enabling %s' % str(enable_list)[1:-1]
+			status_msg += ' enabling "%s"' % str(enable_list)[1:-1]
 		if len(disable_list) > 0:
-			status_msg += ' disabling %s' % str(disable_list)[1:-1]
+			status_msg += ' disabling "%s"' % str(disable_list)[1:-1]
 		set_node_flag(self, clustername, rc.hostname(), batch_id, CLUSTER_DAEMON, status_msg)
 	except:
 		pass
@@ -2168,7 +2107,7 @@
 	if len(errors) > 0:
 		return (False, {'errors': errors})
 
-	response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&nodename=%s&busyfirst=true' % (request['URL'], NODE, clustername, nodename))
 
 def validateFdom(self, request):
 	errors = list()
@@ -2227,14 +2166,14 @@
 
 	if oldname is None or oldname != name:
 		if model.getFailoverDomainByName(name) is not None:
-			errors.append('A failover domain named \"%s\" already exists.' % name)
+			errors.append('A failover domain named "%s" already exists.' % name)
 
 	fdom = None
 	if oldname is not None:
 		fdom = model.getFailoverDomainByName(oldname)
 		if fdom is None:
 			luci_log.debug_verbose('validateFdom1: No fdom named %s exists' % oldname)
-			errors.append('No failover domain named \"%s" exists.' % oldname)
+			errors.append('No failover domain named "%s" exists.' % oldname)
 		else:
 			fdom.addAttribute('name', name)
 			fdom.children = list()
@@ -2264,7 +2203,7 @@
 			if prioritized:
 				priority = 1
 				try:
-					priority = int(request.form['__PRIORITY__' + i].strip())
+					priority = int(request.form['__PRIORITY__%s' % i].strip())
 					if priority < 1:
 						priority = 1
 				except Exception, e:
@@ -2291,21 +2230,22 @@
 		return (False, {'errors': ['Unable to find a ricci agent for the %s cluster' % clustername ]})
 	ragent = rc.hostname()
 
-	batch_number, result = setClusterConf(rc, conf)
+	batch_number, result = rq.setClusterConf(rc, conf)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('validateFdom4: missing batch and/or result')
 		return (False, {'errors': [ 'An error occurred while constructing the new cluster configuration.' ]})
 
 	try:
 		if oldname:
-			set_node_flag(self, clustername, ragent, str(batch_number), FDOM, 'Updating failover domain \"%s\"' % oldname)
+			set_node_flag(self, clustername, ragent, str(batch_number), FDOM, 'Updating failover domain "%s"' % oldname)
 		else:
-			set_node_flag(self, clustername, ragent, str(batch_number), FDOM_ADD, 'Creating failover domain \"%s\"' % name)
+			set_node_flag(self, clustername, ragent, str(batch_number), FDOM_ADD, 'Creating failover domain "%s"' % name)
 	except Exception, e:
 		luci_log.debug_verbose('validateFdom5: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + FDOM + "&clustername=" + clustername + '&fdomname=' + name + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&fdomname=%s&busyfirst=true' \
+		% (request['URL'], FDOM, clustername, name))
 
 def validateVM(self, request):
 	errors = list()
@@ -2353,7 +2293,7 @@
 			recovery = None
 		else:
 			if recovery != 'restart' and recovery != 'relocate' and recovery != 'disable':
-				errors.append('You entered an invalid recovery option: \"%s\" Valid options are \"restart\" \"relocate\" and \"disable\"')
+				errors.append('You entered an invalid recovery option: "%s". Valid options are "restart", "relocate" and "disable".' % recovery)
 	except:
 		recovery = None
 
@@ -2386,7 +2326,7 @@
 			rmptr.removeChild(xvm)
 			delete_vm = True
 		except:
-			return (False, {'errors': ['No virtual machine service named \"%s\" exists.' % old_name ]})
+			return (False, {'errors': ['No virtual machine service named "%s" exists.' % old_name ]})
 	else:
 		if isNew is True:
 			xvm = Vm()
@@ -2400,7 +2340,7 @@
 				if not xvm:
 					raise Exception, 'not found'
 			except:
-				return (False, {'errors': ['No virtual machine service named \"%s\" exists.' % old_name ]})
+				return (False, {'errors': ['No virtual machine service named "%s" exists.' % old_name ]})
 			xvm.addAttribute('name', vm_name)
 			xvm.addAttribute('path', vm_path)
 
@@ -2447,7 +2387,7 @@
 		luci_log.debug_verbose('validateVM4: no ricci for %s' % clustername)
 		return (False, {'errors': ['Unable to contact a ricci agent for this cluster.']})
 
-	batch_number, result = setClusterConf(rc, stringbuf)
+	batch_number, result = rq.setClusterConf(rc, stringbuf)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('validateVM5: missing batch and/or result')
 		return (False, {'errors': [ 'Error creating virtual machine %s.' % vm_name ]})
@@ -2463,7 +2403,8 @@
 		luci_log.debug_verbose('validateVM6: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + SERVICES + "&clustername=" + clustername + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], SERVICES, clustername))
 
 formValidators = {
 	6: validateCreateCluster,
@@ -2499,6 +2440,14 @@
 		return formValidators[pagetype](self, request)
 
 
+# Policy for showing the cluster chooser menu:
+# 1) If there are no clusters in the ManagedClusterSystems
+# folder, then only the admin user may see this menu, and
+# the configure option should not be displayed.
+# 2) If there are clusters in the ManagedClusterSystems
+# folder, then only display the chooser if the current
+# user has permissions on at least one. If the user is
+# admin, show ALL clusters.
+
 def createCluChooser(self, request, systems):
   dummynode = {}
 
@@ -2514,8 +2463,8 @@
     except:
       pass
 
-  #First, see if a cluster is chosen, then
-  #check that the current user can access that system
+  # First, see if a cluster is chosen, then
+  # check that the current user can access that system
   cname = None
   try:
     cname = request[CLUNAME]
@@ -2532,11 +2481,10 @@
   except:
     pagetype = '3'
 
-
   cldata = {}
   cldata['Title'] = "Cluster List"
   cldata['cfg_type'] = "clusters"
-  cldata['absolute_url'] = url + "?pagetype=" + CLUSTERLIST
+  cldata['absolute_url'] = '%s?pagetype=%s' % (url, CLUSTERLIST)
   cldata['Description'] = "Clusters available for configuration"
   if pagetype == CLUSTERLIST:
     cldata['currentItem'] = True
@@ -2548,7 +2496,7 @@
     cladd = {}
     cladd['Title'] = "Create a New Cluster"
     cladd['cfg_type'] = "clusteradd"
-    cladd['absolute_url'] = url + "?pagetype=" + CLUSTER_ADD
+    cladd['absolute_url'] = '%s?pagetype=%s' % (url, CLUSTER_ADD)
     cladd['Description'] = "Create a Cluster"
     if pagetype == CLUSTER_ADD:
       cladd['currentItem'] = True
@@ -2558,7 +2506,7 @@
   clcfg = {}
   clcfg['Title'] = "Configure"
   clcfg['cfg_type'] = "clustercfg"
-  clcfg['absolute_url'] = url + "?pagetype=" + CLUSTERS
+  clcfg['absolute_url'] = '%s?pagetype=%s' % (url, CLUSTERS)
   clcfg['Description'] = "Configure a cluster"
   if pagetype == CLUSTERS:
     clcfg['currentItem'] = True
@@ -2579,7 +2527,7 @@
     clsys = {}
     clsys['Title'] = system[0]
     clsys['cfg_type'] = "cluster"
-    clsys['absolute_url'] = url + "?pagetype=" + CLUSTER + "&clustername=" + system[0]
+    clsys['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, CLUSTER, system[0])
     clsys['Description'] = "Configure this cluster"
 
     if pagetype == CLUSTER or pagetype == CLUSTER_CONFIG:
@@ -2615,7 +2563,7 @@
   if not model:
     return {}
 
-  #There should be a positive page type
+  # There should be a positive page type
   try:
     pagetype = request[PAGETYPE]
   except:
@@ -2626,14 +2574,14 @@
   except:
     url = "/luci/cluster/index_html"
 
-  #The only way this method can run is if there exists
-  #a clustername query var
+  # The only way this method can run is if there exists
+  # a clustername query var
   cluname = request['clustername']
 
   nd = {}
   nd['Title'] = "Nodes"
   nd['cfg_type'] = "nodes"
-  nd['absolute_url'] = url + "?pagetype=" + NODES + "&clustername=" + cluname
+  nd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, NODES, cluname)
   nd['Description'] = "Node configuration for this cluster"
   if pagetype == NODES or pagetype == NODE_GRID or pagetype == NODE_LIST or pagetype == NODE_CONFIG or pagetype == NODE_ADD or pagetype == NODE:
     nd['show_children'] = True
@@ -2651,7 +2599,7 @@
   ndadd = {}
   ndadd['Title'] = "Add a Node"
   ndadd['cfg_type'] = "nodeadd"
-  ndadd['absolute_url'] = url + "?pagetype=" + NODE_ADD + "&clustername=" + cluname
+  ndadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, NODE_ADD, cluname)
   ndadd['Description'] = "Add a node to this cluster"
   if pagetype == NODE_ADD:
     ndadd['currentItem'] = True
@@ -2661,7 +2609,7 @@
   ndcfg = {}
   ndcfg['Title'] = "Configure"
   ndcfg['cfg_type'] = "nodecfg"
-  ndcfg['absolute_url'] = url + "?pagetype=" + NODE_CONFIG + "&clustername=" + cluname
+  ndcfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, NODE_CONFIG, cluname)
   ndcfg['Description'] = "Configure cluster nodes"
   if pagetype == NODE_CONFIG or pagetype == NODE or pagetype == NODES or pagetype == NODE_LIST or pagetype == NODE_GRID or pagetype == NODE_ADD:
     ndcfg['show_children'] = True
@@ -2682,7 +2630,7 @@
     cfg = {}
     cfg['Title'] = nodename
     cfg['cfg_type'] = "node"
-    cfg['absolute_url'] = url + "?pagetype=" + NODE + "&nodename=" + nodename + "&clustername=" + cluname
+    cfg['absolute_url'] = '%s?pagetype=%s&nodename=%s&clustername=%s' % (url, NODE, nodename, cluname)
     cfg['Description'] = "Configure this cluster node"
     if pagetype == NODE:
       try:
@@ -2711,7 +2659,7 @@
   sv = {}
   sv['Title'] = "Services"
   sv['cfg_type'] = "services"
-  sv['absolute_url'] = url + "?pagetype=" + SERVICES + "&clustername=" + cluname
+  sv['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, SERVICES, cluname)
   sv['Description'] = "Service configuration for this cluster"
   if pagetype == SERVICES or pagetype == SERVICE_CONFIG or pagetype == SERVICE_ADD or pagetype == SERVICE or pagetype == SERVICE_LIST or pagetype == VM_ADD or pagetype == VM_CONFIG:
     sv['show_children'] = True
@@ -2725,7 +2673,7 @@
   svadd = {}
   svadd['Title'] = "Add a Service"
   svadd['cfg_type'] = "serviceadd"
-  svadd['absolute_url'] = url + "?pagetype=" + SERVICE_ADD + "&clustername=" + cluname
+  svadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, SERVICE_ADD, cluname)
   svadd['Description'] = "Add a Service to this cluster"
   if pagetype == SERVICE_ADD:
     svadd['currentItem'] = True
@@ -2736,7 +2684,7 @@
     vmadd = {}
     vmadd['Title'] = "Add a Virtual Service"
     vmadd['cfg_type'] = "vmadd"
-    vmadd['absolute_url'] = url + "?pagetype=" + VM_ADD + "&clustername=" + cluname
+    vmadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, VM_ADD, cluname)
     vmadd['Description'] = "Add a Virtual Service to this cluster"
     if pagetype == VM_ADD:
       vmadd['currentItem'] = True
@@ -2746,7 +2694,7 @@
   svcfg = {}
   svcfg['Title'] = "Configure a Service"
   svcfg['cfg_type'] = "servicecfg"
-  svcfg['absolute_url'] = url + "?pagetype=" + SERVICE_CONFIG + "&clustername=" + cluname
+  svcfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, SERVICE_CONFIG, cluname)
   svcfg['Description'] = "Configure a Service for this cluster"
   if pagetype == SERVICE_CONFIG or pagetype == SERVICE or pagetype == VM_CONFIG:
     svcfg['show_children'] = True
@@ -2759,12 +2707,13 @@
 
   services = model.getServices()
   serviceable = list()
+
   for service in services:
     servicename = service.getName()
     svc = {}
     svc['Title'] = servicename
     svc['cfg_type'] = "service"
-    svc['absolute_url'] = url + "?pagetype=" + SERVICE + "&servicename=" + servicename + "&clustername=" + cluname
+    svc['absolute_url'] = '%s?pagetype=%s&servicename=%s&clustername=%s' % (url, SERVICE, servicename, cluname)
     svc['Description'] = "Configure this service"
     if pagetype == SERVICE:
       try:
@@ -2786,7 +2735,7 @@
     svc = {}
     svc['Title'] = name
     svc['cfg_type'] = "vm"
-    svc['absolute_url'] = url + "?pagetype=" + VM_CONFIG + "&servicename=" + name + "&clustername=" + cluname
+    svc['absolute_url'] = '%s?pagetype=%s&servicename=%s&clustername=%s' % (url, VM_CONFIG, name, cluname)
     svc['Description'] = "Configure this Virtual Service"
     if pagetype == VM_CONFIG:
       try:
@@ -2816,7 +2765,7 @@
   rv = {}
   rv['Title'] = "Resources"
   rv['cfg_type'] = "resources"
-  rv['absolute_url'] = url + "?pagetype=" + RESOURCES + "&clustername=" + cluname
+  rv['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, RESOURCES, cluname)
   rv['Description'] = "Resource configuration for this cluster"
   if pagetype == RESOURCES or pagetype == RESOURCE_CONFIG or pagetype == RESOURCE_ADD or pagetype == RESOURCE:
     rv['show_children'] = True
@@ -2830,7 +2779,7 @@
   rvadd = {}
   rvadd['Title'] = "Add a Resource"
   rvadd['cfg_type'] = "resourceadd"
-  rvadd['absolute_url'] = url + "?pagetype=" + RESOURCE_ADD + "&clustername=" + cluname
+  rvadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, RESOURCE_ADD, cluname)
   rvadd['Description'] = "Add a Resource to this cluster"
   if pagetype == RESOURCE_ADD:
     rvadd['currentItem'] = True
@@ -2840,7 +2789,7 @@
   rvcfg = {}
   rvcfg['Title'] = "Configure a Resource"
   rvcfg['cfg_type'] = "resourcecfg"
-  rvcfg['absolute_url'] = url + "?pagetype=" + RESOURCE_CONFIG + "&clustername=" + cluname
+  rvcfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, RESOURCE_CONFIG, cluname)
   rvcfg['Description'] = "Configure a Resource for this cluster"
   if pagetype == RESOURCE_CONFIG or pagetype == RESOURCE:
     rvcfg['show_children'] = True
@@ -2858,7 +2807,7 @@
     rvc = {}
     rvc['Title'] = resourcename
     rvc['cfg_type'] = "resource"
-    rvc['absolute_url'] = url + "?pagetype=" + RESOURCE + "&resourcename=" + resourcename + "&clustername=" + cluname
+    rvc['absolute_url'] = '%s?pagetype=%s&resourcename=%s&clustername=%s' % (url, RESOURCE, resourcename, cluname)
     rvc['Description'] = "Configure this resource"
     if pagetype == RESOURCE:
       try:
@@ -2885,7 +2834,7 @@
   fd = {}
   fd['Title'] = "Failover Domains"
   fd['cfg_type'] = "failoverdomains"
-  fd['absolute_url'] = url + "?pagetype=" + FDOMS + "&clustername=" + cluname
+  fd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FDOMS, cluname)
   fd['Description'] = "Failover domain configuration for this cluster"
   if pagetype == FDOMS or pagetype == FDOM_CONFIG or pagetype == FDOM_ADD or pagetype == FDOM:
     fd['show_children'] = True
@@ -2899,7 +2848,7 @@
   fdadd = {}
   fdadd['Title'] = "Add a Failover Domain"
   fdadd['cfg_type'] = "failoverdomainadd"
-  fdadd['absolute_url'] = url + "?pagetype=" + FDOM_ADD + "&clustername=" + cluname
+  fdadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FDOM_ADD, cluname)
   fdadd['Description'] = "Add a Failover Domain to this cluster"
   if pagetype == FDOM_ADD:
     fdadd['currentItem'] = True
@@ -2909,7 +2858,7 @@
   fdcfg = {}
   fdcfg['Title'] = "Configure a Failover Domain"
   fdcfg['cfg_type'] = "failoverdomaincfg"
-  fdcfg['absolute_url'] = url + "?pagetype=" + FDOM_CONFIG + "&clustername=" + cluname
+  fdcfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FDOM_CONFIG, cluname)
   fdcfg['Description'] = "Configure a Failover Domain for this cluster"
   if pagetype == FDOM_CONFIG or pagetype == FDOM:
     fdcfg['show_children'] = True
@@ -2927,7 +2876,7 @@
     fdc = {}
     fdc['Title'] = fdomname
     fdc['cfg_type'] = "fdom"
-    fdc['absolute_url'] = url + "?pagetype=" + FDOM + "&fdomname=" + fdomname + "&clustername=" + cluname
+    fdc['absolute_url'] = '%s?pagetype=%s&fdomname=%s&clustername=%s' % (url, FDOM, fdomname, cluname)
     fdc['Description'] = "Configure this Failover Domain"
     if pagetype == FDOM:
       try:
@@ -2954,7 +2903,7 @@
   fen = {}
   fen['Title'] = "Shared Fence Devices"
   fen['cfg_type'] = "fencedevicess"
-  fen['absolute_url'] = url + "?pagetype=" + FENCEDEVS + "&clustername=" + cluname
+  fen['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FENCEDEVS, cluname)
   fen['Description'] = "Fence Device configuration for this cluster"
   if pagetype == FENCEDEVS or pagetype == FENCEDEV_CONFIG or pagetype == FENCEDEV_ADD or pagetype == FENCEDEV:
     fen['show_children'] = True
@@ -2968,7 +2917,7 @@
   fenadd = {}
   fenadd['Title'] = "Add a Fence Device"
   fenadd['cfg_type'] = "fencedeviceadd"
-  fenadd['absolute_url'] = url + "?pagetype=" + FENCEDEV_ADD + "&clustername=" + cluname
+  fenadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FENCEDEV_ADD, cluname)
   fenadd['Description'] = "Add a Fence Device to this cluster"
   if pagetype == FENCEDEV_ADD:
     fenadd['currentItem'] = True
@@ -2978,7 +2927,7 @@
   fencfg = {}
   fencfg['Title'] = "Configure a Fence Device"
   fencfg['cfg_type'] = "fencedevicecfg"
-  fencfg['absolute_url'] = url + "?pagetype=" + FENCEDEV_CONFIG + "&clustername=" + cluname
+  fencfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FENCEDEV_CONFIG, cluname)
   fencfg['Description'] = "Configure a Fence Device for this cluster"
   if pagetype == FENCEDEV_CONFIG or pagetype == FENCEDEV:
     fencfg['show_children'] = True
@@ -2996,7 +2945,7 @@
     fenc = {}
     fenc['Title'] = fencename
     fenc['cfg_type'] = "fencedevice"
-    fenc['absolute_url'] = url + "?pagetype=" + FENCEDEV + "&fencename=" + fencename + "&clustername=" + cluname
+    fenc['absolute_url'] = '%s?pagetype=%s&fencename=%s&clustername=%s' % (url, FENCEDEV, fencename, cluname)
     fenc['Description'] = "Configure this Fence Device"
     if pagetype == FENCEDEV:
       try:
@@ -3032,18 +2981,16 @@
 
   return dummynode
 
-
 def getClusterName(self, model):
-  return model.getClusterName()
+	return model.getClusterName()
 
 def getClusterAlias(self, model):
-  if not model:
-    return ''
-  alias = model.getClusterAlias()
-  if alias is None:
-    return model.getClusterName()
-  else:
-    return alias
+	if not model:
+		return ''
+	alias = model.getClusterAlias()
+	if alias is None:
+		return model.getClusterName()
+	return alias
 
 def getClusterURL(self, request, model):
 	try:
@@ -3110,18 +3057,17 @@
 
   return portaltabs
 
-
-
 def check_clusters(self, clusters):
-  clist = list()
-  for cluster in clusters:
-    if cluster_permission_check(cluster[1]):
-      clist.append(cluster)
+	sm = AccessControl.getSecurityManager()
+	user = sm.getUser()
 
-  return clist
+	clist = list()
+	for cluster in clusters:
+		if user.has_permission('View', cluster[1]):
+			clist.append(cluster)
+	return clist
 
 def cluster_permission_check(cluster):
-	#Does this take too long?
 	try:
 		sm = AccessControl.getSecurityManager()
 		user = sm.getUser()
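
The rewritten check_clusters asks the Zope security machinery directly rather than delegating to cluster_permission_check for each entry. A sketch of the same filter in isolation (hypothetical function name; assumes an active Zope security context and (name, folder) pairs as returned by objectItems()):

# Hypothetical sketch -- assumes a Zope request is in flight, so
# AccessControl.getSecurityManager() returns the current user.
import AccessControl

def visible_clusters(clusters):
	user = AccessControl.getSecurityManager().getUser()
	# keep only the cluster folders this user may 'View'
	return [c for c in clusters if user.has_permission('View', c[1])]
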
@@ -3133,7 +3079,7 @@
 
 def getRicciAgent(self, clustername):
 	#Check cluster permission here! return none if false
-	path = str(CLUSTER_FOLDER_PATH + clustername)
+	path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 
 	try:
 		clusterfolder = self.restrictedTraverse(path)
@@ -3315,7 +3261,7 @@
 	results.append(vals)
 
 	try:
-		cluster_path = CLUSTER_FOLDER_PATH + clustername
+		cluster_path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 		nodelist = self.restrictedTraverse(cluster_path).objectItems('Folder')
 	except Exception, e:
 		luci_log.debug_verbose('GCSDB0: %s -> %s: %s' \
@@ -3346,7 +3292,7 @@
 
 def getClusterStatus(self, request, rc, cluname=None):
 	try:
-		doc = getClusterStatusBatch(rc)
+		doc = rq.getClusterStatusBatch(rc)
 		if not doc:
 			raise Exception, 'doc is None'
 	except Exception, e:
@@ -3428,7 +3374,7 @@
 	return results
 
 def getServicesInfo(self, status, model, req):
-	map = {}
+	svc_map = {}
 	maplist = list()
 
 	try:
@@ -3461,39 +3407,39 @@
 				cur_node = item['nodename']
 				itemmap['running'] = "true"
 				itemmap['nodename'] = cur_node
-				itemmap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_STOP
-				itemmap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_RESTART
+				itemmap['disableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE_STOP)
+				itemmap['restarturl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE_RESTART)
 			else:
-				itemmap['enableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_START
+				itemmap['enableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE_START)
 
 			itemmap['autostart'] = item['autostart']
 
 			try:
 				svc = model.retrieveServiceByName(item['name'])
-				itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE
-				itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE_DELETE
+				itemmap['cfgurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE)
+				itemmap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE_DELETE)
 			except:
 				try:
 					svc = model.retrieveVMsByName(item['name'])
 					itemmap['is_vm'] = True
-					itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + VM_CONFIG 
-					itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + VM_CONFIG
+					itemmap['cfgurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], VM_CONFIG)
+					itemmap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], VM_CONFIG)
 				except:
 					continue
 
 			starturls = list()
 			for node in nodes:
+				cur_nodename = node.getName()
 				if node.getName() != cur_node:
 					starturl = {}
-					cur_nodename = node.getName()
 					starturl['nodename'] = cur_nodename
-					starturl['url'] = baseurl + '?' + 'clustername=' + cluname +'&servicename=' + item['name'] + '&pagetype=' + SERVICE_START + '&nodename=' + node.getName()
+					starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, item['name'], SERVICE_START, cur_nodename)
 					starturls.append(starturl)
 
 					if itemmap.has_key('is_vm') and itemmap['is_vm'] is True:
 						migrate_url = { 'nodename': cur_nodename }
 						migrate_url['migrate'] = True
-						migrate_url['url'] = baseurl + '?' + 'clustername=' + cluname +'&servicename=' + item['name'] + '&pagetype=' + SERVICE_MIGRATE + '&nodename=' + node.getName()
+						migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, item['name'], SERVICE_MIGRATE, cur_nodename)
 						starturls.append(migrate_url)
 
 			itemmap['links'] = starturls
@@ -3505,13 +3451,14 @@
 				itemmap['faildom'] = "No Failover Domain"
 			maplist.append(itemmap)
 
-	map['services'] = maplist
-	return map
+	svc_map['services'] = maplist
+	return svc_map
 
 def get_fdom_names(model):
 	return map(lambda x: x.getName(), model.getFailoverDomains())
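
The dict renames in these hunks (map becomes svc_map; clu_map, nl_map and fence_map follow below) are more than style: get_fdom_names depends on the map() builtin, which a local variable named map would shadow. A minimal illustration of the hazard (hypothetical code, not from luci):

# Hypothetical illustration of builtin shadowing -- not luci code.
def broken_fdom_names(model):
	map = {}   # this local dict shadows the map() builtin below
	map['services'] = list()
	# raises TypeError: 'dict' object is not callable
	return map(lambda x: x.getName(), model.getFailoverDomains())
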
 
 def getServiceInfo(self, status, model, req):
+	from Products.Archetypes.utils import make_uuid
 	#set up struct for service config page
 	hmap = {}
 	root_uuid = 'toplevel'
@@ -3561,11 +3508,11 @@
 				if item['running'] == 'true':
 					hmap['running'] = 'true'
 					nodename = item['nodename']
-					innermap['current'] = 'This service is currently running on %s' % nodename
+					innermap['current'] = 'Running on %s' % nodename
 
-					innermap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_STOP
-					innermap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_RESTART
-					innermap['delurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_DELETE
+					innermap['disableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_STOP)
+					innermap['restarturl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_RESTART)
+					innermap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_DELETE)
 
 					#In this case, determine where it can run...
 					nodes = model.getNodes()
@@ -3574,20 +3521,20 @@
 							starturl = {}
 							cur_nodename = node.getName()
 							starturl['nodename'] = cur_nodename
-							starturl['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START + "&nodename=" + node.getName()
+							starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_START, cur_nodename)
 							starturls.append(starturl)
 
 							if item.has_key('is_vm') and item['is_vm'] is True:
 								migrate_url = { 'nodename': cur_nodename }
-								migrate_url['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_MIGRATE + "&nodename=" + node.getName()
+								migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_MIGRATE, cur_nodename)
 								migrate_url['migrate'] = True
 								starturls.append(migrate_url)
 					innermap['links'] = starturls
 				else:
 					#Do not set ['running'] in this case...ZPT will detect it is missing
-					innermap['current'] = "This service is currently stopped"
-					innermap['enableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START
-					innermap['delurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_DELETE
+					innermap['current'] = "Stopped"
+					innermap['enableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_START)
+					innermap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_DELETE)
 
 					nodes = model.getNodes()
 					starturls = list()
@@ -3596,12 +3543,12 @@
 						cur_nodename = node.getName()
 
 						starturl['nodename'] = cur_nodename
-						starturl['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START + "&nodename=" + node.getName()
+						starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_START, cur_nodename)
 						starturls.append(starturl)
 
 						if item.has_key('is_vm') and item['is_vm'] is True:
 							migrate_url = { 'nodename': cur_nodename }
-							migrate_url['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_MIGRATE + "&nodename=" + node.getName()
+							migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_MIGRATE, cur_nodename)
 							migrate_url['migrate'] = True
 							starturls.append(migrate_url)
 					innermap['links'] = starturls
@@ -3712,22 +3659,25 @@
 			% svcname)
 		return None
 
-	batch_number, result = startService(rc, svcname, nodename)
+	batch_number, result = rq.startService(rc, svcname, nodename)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('serviceStart3: SS(%s,%s,%s) call failed' \
 			% (svcname, cluname, nodename))
 		return None
 
 	try:
-		status_msg = "Starting service \'%s\'" % svcname
 		if nodename:
-			status_msg += " on node \'%s\'" % nodename
+			status_msg = 'Starting service "%s" on node "%s"' \
+				% (svcname, nodename)
+		else:
+			status_msg = 'Starting service "%s"' % svcname
 		set_node_flag(self, cluname, rc.hostname(), str(batch_number), SERVICE_START, status_msg)
 	except Exception, e:
 		luci_log.debug_verbose('serviceStart4: error setting flags for service %s at node %s for cluster %s' % (svcname, nodename, cluname))
 
 	response = req.RESPONSE
-	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (req['URL'], SERVICE_LIST, cluname))
 
 def serviceMigrate(self, rc, req):
 	svcname = None
@@ -3770,7 +3720,7 @@
 			% svcname)
 		return None
 
-	batch_number, result = migrateService(rc, svcname, nodename)
+	batch_number, result = rq.migrateService(rc, svcname, nodename)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('serviceMigrate3: SS(%s,%s,%s) call failed' \
 			% (svcname, cluname, nodename))
@@ -3782,7 +3732,8 @@
 		luci_log.debug_verbose('serviceMigrate4: error setting flags for service %s at node %s for cluster %s' % (svcname, nodename, cluname))
 
 	response = req.RESPONSE
-	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (req['URL'], SERVICE_LIST, cluname))
 
 def serviceRestart(self, rc, req):
 	svcname = None
@@ -3811,7 +3762,7 @@
 		luci_log.debug_verbose('serviceRestart1: no cluster for %s' % svcname)
 		return None
 
-	batch_number, result = restartService(rc, svcname)
+	batch_number, result = rq.restartService(rc, svcname)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('serviceRestart2: %s failed' % svcname)
 		return None
@@ -3822,7 +3773,8 @@
 		luci_log.debug_verbose('serviceRestart3: error setting flags for service %s for cluster %s' % (svcname, cluname))
 
 	response = req.RESPONSE
-	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (req['URL'], SERVICE_LIST, cluname))
 
 def serviceStop(self, rc, req):
 	svcname = None
@@ -3851,7 +3803,7 @@
 		luci_log.debug_verbose('serviceStop1: no cluster name for %s' % svcname)
 		return None
 
-	batch_number, result = stopService(rc, svcname)
+	batch_number, result = rq.stopService(rc, svcname)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('serviceStop2: stop %s failed' % svcname)
 		return None
@@ -3862,7 +3814,8 @@
 		luci_log.debug_verbose('serviceStop3: error setting flags for service %s for cluster %s' % (svcname, cluname))
 
 	response = req.RESPONSE
-	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (req['URL'], SERVICE_LIST, cluname))
 
 def getFdomInfo(self, model, request):
 	fhash = {}
@@ -3913,7 +3866,8 @@
   for fdom in fdoms:
     fdom_map = {}
     fdom_map['name'] = fdom.getName()
-    fdom_map['cfgurl'] = baseurl + "?pagetype=" + FDOM + "&clustername=" + clustername + '&fdomname=' + fdom.getName()
+    fdom_map['cfgurl'] = '%s?pagetype=%s&clustername=%s&fdomname=%s' \
+		% (baseurl, FDOM, clustername, fdom.getName())
     ordered_attr = fdom.getAttribute('ordered')
     restricted_attr = fdom.getAttribute('restricted')
     if ordered_attr is not None and (ordered_attr == "true" or ordered_attr == "1"):
@@ -3933,7 +3887,8 @@
         if nitem['name'] == ndname:
           break
       nodesmap['nodename'] = ndname
-      nodesmap['nodecfgurl'] = baseurl + "?clustername=" + clustername + "&nodename=" + ndname + "&pagetype=" + NODE
+      nodesmap['nodecfgurl'] = '%s?clustername=%s&nodename=%s&pagetype=%s' \
+		% (baseurl, clustername, ndname, NODE)
       if nitem['clustered'] == "true":
         nodesmap['status'] = NODE_ACTIVE
       elif nitem['online'] == "false":
@@ -3959,7 +3914,8 @@
           svcmap = {}
           svcmap['name'] = svcname
           svcmap['status'] = sitem['running']
-          svcmap['svcurl'] = baseurl + "?pagetype=" + SERVICE + "&clustername=" + clustername + "&servicename=" + svcname
+          svcmap['svcurl'] = '%s?pagetype=%s&clustername=%s&servicename=%s' \
+			% (baseurl, SERVICE, clustername, svcname)
           svcmap['location'] = sitem['nodename']
           svclist.append(svcmap)
     fdom_map['svclist'] = svclist
@@ -4044,8 +4000,9 @@
     if totem:
       clumap['totem'] = totem.getAttributes()
 
-  prop_baseurl = req['URL'] + '?' + PAGETYPE + '=' + CLUSTER_CONFIG + '&' + CLUNAME + '=' + cluname + '&'
-  basecluster_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_GENERAL_TAB
+  prop_baseurl = '%s?pagetype=%s&clustername=%s&' \
+	% (req['URL'], CLUSTER_CONFIG, cluname)
+  basecluster_url = '%stab=%s' % (prop_baseurl, PROP_GENERAL_TAB)
   #needed:
   clumap['basecluster_url'] = basecluster_url
   #name field
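
Note that prop_baseurl is intentionally left ending in '&', so each properties tab below can append its pair with a plain '%stab=%s'. A sketch that avoids the trailing-ampersand convention (hypothetical helper; the literal 'tab' key mirrors the format strings below):

# Hypothetical alternative to the trailing-'&' convention -- sketch only.
def prop_tab_url(base_url, cluname, tab):
	return '%s?pagetype=%s&clustername=%s&tab=%s' \
		% (base_url, CLUSTER_CONFIG, cluname, tab)

# basecluster_url = prop_tab_url(req['URL'], cluname, PROP_GENERAL_TAB)
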
@@ -4061,7 +4018,7 @@
   gulm_ptr = model.getGULMPtr()
   if not gulm_ptr:
     #Fence Daemon Props
-    fencedaemon_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_FENCE_TAB
+    fencedaemon_url = '%stab=%s' % (prop_baseurl, PROP_FENCE_TAB)
     clumap['fencedaemon_url'] = fencedaemon_url
     fdp = model.getFenceDaemonPtr()
     pjd = fdp.getAttribute('post_join_delay')
@@ -4077,7 +4034,7 @@
 
     #-------------
     #if multicast
-    multicast_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_MCAST_TAB
+    multicast_url = '%stab=%s' % (prop_baseurl, PROP_MCAST_TAB)
     clumap['multicast_url'] = multicast_url
     #mcast addr
     is_mcast = model.isMulticast()
@@ -4100,12 +4057,12 @@
       if not n in gulm_lockservs:
         lockserv_list.append((n, False))
     clumap['gulm'] = True
-    clumap['gulm_url'] = prop_baseurl + PROPERTIES_TAB + '=' + PROP_GULM_TAB
+    clumap['gulm_url'] = '%stab=%s' % (prop_baseurl, PROP_GULM_TAB)
     clumap['gulm_lockservers'] = lockserv_list
 
   #-------------
   #quorum disk params
-  quorumd_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_QDISK_TAB
+  quorumd_url = '%stab=%s' % (prop_baseurl, PROP_QDISK_TAB)
   clumap['quorumd_url'] = quorumd_url
   is_quorumd = model.isQuorumd()
   clumap['is_quorumd'] = is_quorumd
@@ -4171,7 +4128,7 @@
   return clumap
 
 def getClustersInfo(self, status, req):
-  map = {}
+  clu_map = {}
   nodelist = list()
   svclist = list()
   clulist = list()
@@ -4190,28 +4147,33 @@
     return {}
   clu = clulist[0]
   if 'error' in clu:
-    map['error'] = True
+    clu_map['error'] = True
   clustername = clu['name']
   if clu['alias'] != "":
-    map['clusteralias'] = clu['alias']
+    clu_map['clusteralias'] = clu['alias']
   else:
-    map['clusteralias'] = clustername
-  map['clustername'] = clustername
+    clu_map['clusteralias'] = clustername
+  clu_map['clustername'] = clustername
   if clu['quorate'] == "true":
-    map['status'] = "Quorate"
-    map['running'] = "true"
+    clu_map['status'] = "Quorate"
+    clu_map['running'] = "true"
   else:
-    map['status'] = "Not Quorate"
-    map['running'] = "false"
-  map['votes'] = clu['votes']
-  map['minquorum'] = clu['minQuorum']
-
-  map['clucfg'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_CONFIG + "&" + CLUNAME + "=" + clustername
-
-  map['restart_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_RESTART
-  map['stop_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_STOP
-  map['start_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_START
-  map['delete_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_DELETE
+    clu_map['status'] = "Not Quorate"
+    clu_map['running'] = "false"
+  clu_map['votes'] = clu['votes']
+  clu_map['minquorum'] = clu['minQuorum']
+
+  clu_map['clucfg'] = '%s?pagetype=%s&clustername=%s' \
+	% (baseurl, CLUSTER_CONFIG, clustername)
+
+  clu_map['restart_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
+	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_RESTART)
+  clu_map['stop_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
+	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_STOP)
+  clu_map['start_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
+	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_START)
+  clu_map['delete_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
+	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_DELETE)
 
   svc_dict_list = list()
   for svc in svclist:
@@ -4220,23 +4182,26 @@
       svcname = svc['name']
       svc_dict['name'] = svcname
       svc_dict['srunning'] = svc['running']
+      svc_dict['servicename'] = svcname
 
       if svc.has_key('is_vm') and svc['is_vm'] is True:
         target_page = VM_CONFIG
       else:
         target_page = SERVICE
-      svcurl = baseurl + "?" + PAGETYPE + "=" + target_page + "&" + CLUNAME + "=" + clustername + "&servicename=" + svcname
-      svc_dict['servicename'] = svcname
+
+      svcurl = '%s?pagetype=%s&clustername=%s&servicename=%s' \
+		% (baseurl, target_page, clustername, svcname)
       svc_dict['svcurl'] = svcurl
       svc_dict_list.append(svc_dict)
-  map['currentservices'] = svc_dict_list
+  clu_map['currentservices'] = svc_dict_list
   node_dict_list = list()
 
   for item in nodelist:
     nmap = {}
     name = item['name']
     nmap['nodename'] = name
-    cfgurl = baseurl + "?" + PAGETYPE + "=" + NODE + "&" + CLUNAME + "=" + clustername + "&nodename=" + name
+    cfgurl = '%s?pagetype=%s&clustername=%s&nodename=%s' \
+		% (baseurl, NODE, clustername, name)
     nmap['configurl'] = cfgurl
     if item['clustered'] == "true":
       nmap['status'] = NODE_ACTIVE
@@ -4246,11 +4211,11 @@
       nmap['status'] = NODE_INACTIVE
     node_dict_list.append(nmap)
 
-  map['currentnodes'] = node_dict_list
-  return map
+  clu_map['currentnodes'] = node_dict_list
+  return clu_map
 
 def nodeLeave(self, rc, clustername, nodename_resolved):
-	path = str(CLUSTER_FOLDER_PATH + clustername + '/' + nodename_resolved)
+	path = '%s%s/%s' % (CLUSTER_FOLDER_PATH, clustername, nodename_resolved)
 
 	try:
 		nodefolder = self.restrictedTraverse(path)
@@ -4260,7 +4225,7 @@
 		luci_log.debug('NLO: node_leave_cluster err: %s' % str(e))
 		return None
 
-	objname = str(nodename_resolved + "____flag")
+	objname = '%s____flag' % nodename_resolved
 	fnpresent = noNodeFlagsPresent(self, nodefolder, objname, nodename_resolved)
 
 	if fnpresent is None:
@@ -4273,25 +4238,25 @@
 			% nodename_resolved)
 		return None
 
-	batch_number, result = nodeLeaveCluster(rc)
+	batch_number, result = rq.nodeLeaveCluster(rc)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('NL3: nodeLeaveCluster error: batch_number and/or result is None')
 		return None
 
 	try:
-		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_LEAVE_CLUSTER, "Node \'%s\' leaving cluster" % nodename_resolved)
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_LEAVE_CLUSTER, 'Node "%s" leaving cluster "%s"' % (nodename_resolved, clustername))
 	except Exception, e:
 		luci_log.debug_verbose('NL4: failed to set flags: %s' % str(e))
 	return True
 
 def nodeJoin(self, rc, clustername, nodename_resolved):
-	batch_number, result = nodeJoinCluster(rc)
+	batch_number, result = rq.nodeJoinCluster(rc)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('NJ0: batch_number and/or result is None')
 		return None
 
 	try:
-		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_JOIN_CLUSTER, "Node \'%s\' joining cluster" % nodename_resolved)
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_JOIN_CLUSTER, 'Node "%s" joining cluster "%s"' % (nodename_resolved, clustername))
 	except Exception, e:
 		luci_log.debug_verbose('NJ1: failed to set flags: %s' % str(e))
 	return True
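
nodeLeave, nodeJoin and the force* helpers below all share one shape: submit a ricci batch, bail out if the batch was refused, then record a node flag while tolerating flag failures. A condensed sketch of that shape (hypothetical helper; rq, set_node_flag and luci_log as used in this file):

# Hypothetical condensation of the pattern above -- illustration only.
def run_batch_with_flag(self, clustername, rc, submit, flag_type, status_msg):
	# submit wraps an rq.* call, e.g. lambda: rq.nodeJoinCluster(rc)
	batch_number, result = submit()
	if batch_number is None or result is None:
		return None
	try:
		set_node_flag(self, clustername, rc.hostname(),
			str(batch_number), flag_type, status_msg)
	except Exception, e:
		# flag failures are logged but never treated as fatal
		luci_log.debug_verbose('flag error: %s' % str(e))
	return True
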
@@ -4362,10 +4327,16 @@
 		luci_log.debug_verbose('cluRestart0: clusterStop: %d errs' % snum_err)
 	jnum_err = clusterStart(self, model)
 	if jnum_err:
-		luci_log.debug_verbose('cluRestart0: clusterStart: %d errs' % jnum_err)
+		luci_log.debug_verbose('cluRestart1: clusterStart: %d errs' % jnum_err)
 	return snum_err + jnum_err
 
 def clusterDelete(self, model):
+	# Try to stop all the cluster nodes before deleting any.
+	num_errors = clusterStop(self, model, delete=False)
+	if num_errors > 0:
+		return None
+
+	# If the cluster is stopped, delete all of the nodes.
 	num_errors = clusterStop(self, model, delete=True)
 	try:
 		clustername = model.getClusterName()
@@ -4381,7 +4352,7 @@
 				% (clustername, str(e)))
 
 		try:
-			clusterfolder = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH + clustername))
+			clusterfolder = self.restrictedTraverse('%s%s' % (CLUSTER_FOLDER_PATH, clustername))
 			if len(clusterfolder.objectItems()) < 1:
 				clusters = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH))
 				clusters.manage_delObjects([clustername])
@@ -4394,19 +4365,19 @@
 			% (clustername, num_errors))
 
 def forceNodeReboot(self, rc, clustername, nodename_resolved):
-	batch_number, result = nodeReboot(rc)
+	batch_number, result = rq.nodeReboot(rc)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('FNR0: batch_number and/or result is None')
 		return None
 
 	try:
-		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_REBOOT, "Node \'%s\' is being rebooted" % nodename_resolved)
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_REBOOT, 'Node "%s" is being rebooted' % nodename_resolved)
 	except Exception, e:
 		luci_log.debug_verbose('FNR1: failed to set flags: %s' % str(e))
 	return True
 
 def forceNodeFence(self, clustername, nodename, nodename_resolved):
-	path = str(CLUSTER_FOLDER_PATH + clustername)
+	path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 
 	try:
 		clusterfolder = self.restrictedTraverse(path)
@@ -4460,13 +4431,13 @@
 	if not found_one:
 		return None
 
-	batch_number, result = nodeFence(rc, nodename)
+	batch_number, result = rq.nodeFence(rc, nodename)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('FNF3: batch_number and/or result is None')
 		return None
 
 	try:
-		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_FENCE, "Node \'%s\' is being fenced" % nodename_resolved)
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_FENCE, 'Node "%s" is being fenced' % nodename_resolved)
 	except Exception, e:
 		luci_log.debug_verbose('FNF4: failed to set flags: %s' % str(e))
 	return True
@@ -4481,7 +4452,7 @@
 		# Make sure we can find a second node before we hose anything.
 		found_one = False
 
-		path = str(CLUSTER_FOLDER_PATH + clustername)
+		path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 
 		try:
 			clusterfolder = self.restrictedTraverse(path)
@@ -4540,7 +4511,7 @@
 
 	# First, delete cluster.conf from node to be deleted.
 	# next, have node leave cluster.
-	batch_number, result = nodeLeaveCluster(rc, purge=True)
+	batch_number, result = rq.nodeLeaveCluster(rc, purge=True)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('ND5: batch_number and/or result is None')
 		return None
@@ -4552,8 +4523,7 @@
 
 	if delete_cluster:
 		try:
-			set_node_flag(self, clustername, rc.hostname(), str(batch_number), CLUSTER_DELETE, "Deleting cluster \"%s\": Deleting node \'%s\'" \
-				% (clustername, nodename_resolved))
+			set_node_flag(self, clustername, rc.hostname(), str(batch_number), CLUSTER_DELETE, 'Deleting cluster "%s": Deleting node "%s"' % (clustername, nodename_resolved))
 		except Exception, e:
 			luci_log.debug_verbose('ND5a: failed to set flags: %s' % str(e))
 	else:
@@ -4589,13 +4559,13 @@
 			return None
 
 		# propagate the new cluster.conf via the second node
-		batch_number, result = setClusterConf(rc2, str(str_buf))
+		batch_number, result = rq.setClusterConf(rc2, str(str_buf))
 		if batch_number is None:
 			luci_log.debug_verbose('ND8: batch number is None after del node in NTP')
 			return None
 
 	# Now we need to delete the node from the DB
-	path = str(CLUSTER_FOLDER_PATH + clustername)
+	path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 	try:
 		clusterfolder = self.restrictedTraverse(path)
 		clusterfolder.manage_delObjects([nodename_resolved])
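
The deletion sequence here is order-sensitive: the node being removed purges its own cluster.conf and leaves, the trimmed configuration is pushed through a second, still-authenticated node, and only then is the node forgotten in the luci database. An outline under those assumptions (hypothetical function; rq and the Zope calls as used above):

# Hypothetical outline of the ordering above -- sketch only.
def delete_node_outline(self, rc, rc2, clustername, nodename_resolved, conf):
	# 1. the node being removed purges cluster.conf and leaves
	if rq.nodeLeaveCluster(rc, purge=True)[0] is None:
		return None
	# 2. propagate the trimmed cluster.conf via a second, authed node
	if rq.setClusterConf(rc2, conf)[0] is None:
		return None
	# 3. only then drop the node from the luci database
	path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
	self.restrictedTraverse(path).manage_delObjects([nodename_resolved])
	return True
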
@@ -4663,12 +4633,12 @@
 		if not cluinfo[0] and not cluinfo[1]:
 			luci_log.debug('NTP5: node %s not in a cluster (expected %s)' \
 				% (nodename_resolved, clustername))
-			return (False, {'errors': [ 'Node %s reports it is not in a cluster.' % nodename_resolved ]})
+			return (False, {'errors': [ 'Node "%s" reports it is not in a cluster.' % nodename_resolved ]})
 
 		cname = clustername.lower()
 		if cname != cluinfo[0].lower() and cname != cluinfo[1].lower():
 			luci_log.debug('NTP6: node %s in unknown cluster %s:%s (expected %s)' % (nodename_resolved, cluinfo[0], cluinfo[1], clustername))
-			return (False, {'errors': [ 'Node %s reports it in cluster \"%s\". We expect it to be a member of cluster \"%s\"' % (nodename_resolved, cluinfo[0], clustername) ]})
+			return (False, {'errors': [ 'Node "%s" reports it is in cluster "%s". We expect it to be a member of cluster "%s"' % (nodename_resolved, cluinfo[0], clustername) ]})
 
 		if not rc.authed():
 			rc = None
@@ -4689,45 +4659,50 @@
 		if rc is None:
 			luci_log.debug('NTP7: node %s is not authenticated' \
 				% nodename_resolved)
-			return (False, {'errors': [ 'Node %s is not authenticated' % nodename_resolved ]})
+			return (False, {'errors': [ 'Node "%s" is not authenticated.' % nodename_resolved ]})
 
 	if task == NODE_LEAVE_CLUSTER:
 		if nodeLeave(self, rc, clustername, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP8: nodeLeave failed')
-			return (False, {'errors': [ 'Node %s failed to leave cluster %s' % (nodename_resolved, clustername) ]})
+			return (False, {'errors': [ 'Node "%s" failed to leave cluster "%s"' % (nodename_resolved, clustername) ]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 	elif task == NODE_JOIN_CLUSTER:
 		if nodeJoin(self, rc, clustername, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP9: nodeJoin failed')
-			return (False, {'errors': [ 'Node %s failed to join cluster %s' % (nodename_resolved, clustername) ]})
+			return (False, {'errors': [ 'Node "%s" failed to join cluster "%s"' % (nodename_resolved, clustername) ]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 	elif task == NODE_REBOOT:
 		if forceNodeReboot(self, rc, clustername, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP10: nodeReboot failed')
-			return (False, {'errors': [ 'Node %s failed to reboot' \
+			return (False, {'errors': [ 'Node "%s" failed to reboot.' \
 				% nodename_resolved ]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 	elif task == NODE_FENCE:
 		if forceNodeFence(self, clustername, nodename, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP11: nodeFence failed')
-			return (False, {'errors': [ 'Fencing of node %s failed.' \
+			return (False, {'errors': [ 'Fencing of node "%s" failed.' \
 				% nodename_resolved]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 	elif task == NODE_DELETE:
 		if nodeDelete(self, rc, model, clustername, nodename, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP12: nodeDelete failed')
-			return (False, {'errors': [ 'Deletion of node %s from cluster %s failed.' % (nodename_resolved, clustername) ]})
+			return (False, {'errors': [ 'Deletion of node "%s" from cluster "%s" failed.' % (nodename_resolved, clustername) ]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 
 def getNodeInfo(self, model, status, request):
   infohash = {}
@@ -4770,17 +4745,26 @@
 
   #set up drop down links
   if nodestate == NODE_ACTIVE:
-    infohash['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_LEAVE_CLUSTER + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['reboot_url'] = baseurl + "?pagetype=" +NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + nodename + "&clustername=" + clustername
+    infohash['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_LEAVE_CLUSTER, nodename, clustername)
+    infohash['reboot_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_REBOOT, nodename, clustername)
+    infohash['fence_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_FENCE, nodename, clustername)
+    infohash['delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_DELETE, nodename, clustername)
   elif nodestate == NODE_INACTIVE:
-    infohash['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_JOIN_CLUSTER + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['reboot_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + nodename + "&clustername=" + clustername
+    infohash['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_JOIN_CLUSTER, nodename, clustername)
+    infohash['reboot_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_REBOOT, nodename, clustername)
+    infohash['fence_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_FENCE, nodename, clustername)
+    infohash['delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_DELETE, nodename, clustername)
   else:
-    infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
+    infohash['fence_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_FENCE, nodename, clustername)
 
   #figure out current services running on this node
   svc_dict_list = list()
@@ -4788,7 +4772,8 @@
     if svc['nodename'] == nodename:
       svc_dict = {}
       svcname = svc['name']
-      svcurl = baseurl + "?" + PAGETYPE + "=" + SERVICE + "&" + CLUNAME + "=" + clustername + "&servicename=" + svcname
+      svcurl = '%s?pagetype=%s&clustername=%s&servicename=%s' \
+        % (baseurl, SERVICE, clustername, svcname)
       svc_dict['servicename'] = svcname
       svc_dict['svcurl'] = svcurl
       svc_dict_list.append(svc_dict)
@@ -4808,7 +4793,8 @@
     for fdom in fdoms:
       fdom_dict = {}
       fdom_dict['name'] = fdom.getName()
-      fdomurl = baseurl + "?" + PAGETYPE + "=" + FDOM_CONFIG + "&" + CLUNAME + "=" + clustername + "&fdomname=" + fdom.getName()
+      fdomurl = '%s?pagetype=%s&clustername=%s&fdomname=%s' \
+		% (baseurl, FDOM_CONFIG, clustername, fdom.getName())
       fdom_dict['fdomurl'] = fdomurl
       fdom_dict_list.append(fdom_dict)
   else:
@@ -4842,15 +4828,13 @@
       else:
         dlist.append("lock_gulmd")
       dlist.append("rgmanager")
-      dlist.append("clvmd")
-      dlist.append("gfs")
-      dlist.append("gfs2")
-      states = getDaemonStates(rc, dlist)
+      states = rq.getDaemonStates(rc, dlist)
       infohash['d_states'] = states
   else:
     infohash['ricci_error'] = True
 
-  infohash['logurl'] = '/luci/logs/?nodename=' + nodename_resolved + '&clustername=' + clustername
+  infohash['logurl'] = '/luci/logs/?nodename=%s&clustername=%s' \
+	% (nodename_resolved, clustername)
   return infohash
 
 def getNodesInfo(self, model, status, req):
@@ -4886,50 +4870,60 @@
           return {}
 
   for item in nodelist:
-    map = {}
+    nl_map = {}
     name = item['name']
-    map['nodename'] = name
+    nl_map['nodename'] = name
     try:
-      map['gulm_lockserver'] = model.isNodeLockserver(name)
+      nl_map['gulm_lockserver'] = model.isNodeLockserver(name)
     except:
-      map['gulm_lockserver'] = False
+      nl_map['gulm_lockserver'] = False
 
     try:
       baseurl = req['URL']
     except:
       baseurl = '/luci/cluster/index_html'
 
-    cfgurl = baseurl + "?" + PAGETYPE + "=" + NODE + "&" + CLUNAME + "=" + clustername + "&nodename=" + name
-
-    map['configurl'] = cfgurl
-    map['fenceurl'] = cfgurl + "#fence"
+    cfgurl = '%s?pagetype=%s&clustername=%s&nodename=%s' \
+      % (baseurl, NODE, clustername, name)
+    nl_map['configurl'] = cfgurl
+    nl_map['fenceurl'] = '%s#fence' % cfgurl
     if item['clustered'] == "true":
-      map['status'] = NODE_ACTIVE
-      map['status_str'] = NODE_ACTIVE_STR
+      nl_map['status'] = NODE_ACTIVE
+      nl_map['status_str'] = NODE_ACTIVE_STR
     elif item['online'] == "false":
-      map['status'] = NODE_UNKNOWN
-      map['status_str'] = NODE_UNKNOWN_STR
+      nl_map['status'] = NODE_UNKNOWN
+      nl_map['status_str'] = NODE_UNKNOWN_STR
     else:
-      map['status'] = NODE_INACTIVE
-      map['status_str'] = NODE_INACTIVE_STR
+      nl_map['status'] = NODE_INACTIVE
+      nl_map['status_str'] = NODE_INACTIVE_STR
 
     nodename_resolved = resolve_nodename(self, clustername, name)
 
-    map['logurl'] = '/luci/logs?nodename=' + nodename_resolved + '&clustername=' + clustername
+    nl_map['logurl'] = '/luci/logs?nodename=%s&clustername=%s' \
+		% (nodename_resolved, clustername)
 
     #set up URLs for dropdown menu...
-    if map['status'] == NODE_ACTIVE:
-      map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_LEAVE_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
-      map['reboot_url'] = baseurl + "?pagetype=" +NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + name + "&clustername=" + clustername
-      map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
-      map['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + name + "&clustername=" + clustername
-    elif map['status'] == NODE_INACTIVE:
-      map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_JOIN_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
-      map['reboot_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + name + "&clustername=" + clustername
-      map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
-      map['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + name + "&clustername=" + clustername
+    if nl_map['status'] == NODE_ACTIVE:
+      nl_map['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_LEAVE_CLUSTER, name, clustername)
+      nl_map['reboot_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_REBOOT, name, clustername)
+      nl_map['fence_it_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_FENCE, name, clustername)
+      nl_map['delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_DELETE, name, clustername)
+    elif nl_map['status'] == NODE_INACTIVE:
+      nl_map['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_JOIN_CLUSTER, name, clustername)
+      nl_map['reboot_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_REBOOT, name, clustername)
+      nl_map['fence_it_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_FENCE, name, clustername)
+      nl_map['delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_DELETE, name, clustername)
     else:
-      map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
+      nl_map['fence_it_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_FENCE, name, clustername)
 
     #figure out current services running on this node
     svc_dict_list = list()
@@ -4937,29 +4931,31 @@
       if svc['nodename'] == name:
         svc_dict = {}
         svcname = svc['name']
-        svcurl = baseurl + "?" + PAGETYPE + "=" + SERVICE + "&" + CLUNAME + "=" + clustername + "&servicename=" + svcname
+        svcurl = '%s?pagetype=%s&clustername=%s&servicename=%s' \
+          % (baseurl, SERVICE, clustername, svcname)
         svc_dict['servicename'] = svcname
         svc_dict['svcurl'] = svcurl
         svc_dict_list.append(svc_dict)
 
-    map['currentservices'] = svc_dict_list
+    nl_map['currentservices'] = svc_dict_list
     #next is faildoms
 
     if model:
       fdoms = model.getFailoverDomainsForNode(name)
     else:
-      map['ricci_error'] = True
+      nl_map['ricci_error'] = True
       fdoms = list()
     fdom_dict_list = list()
     for fdom in fdoms:
       fdom_dict = {}
       fdom_dict['name'] = fdom.getName()
-      fdomurl = baseurl + "?" + PAGETYPE + "=" + FDOM_CONFIG + "&" + CLUNAME + "=" + clustername + "&fdomname=" + fdom.getName()
+      fdomurl = '%s?pagetype=%s&clustername=%s&fdomname=%s' \
+		% (baseurl, FDOM_CONFIG, clustername, fdom.getName())
       fdom_dict['fdomurl'] = fdomurl
       fdom_dict_list.append(fdom_dict)
 
-    map['fdoms'] = fdom_dict_list
-    resultlist.append(map)
+    nl_map['fdoms'] = fdom_dict_list
+    resultlist.append(nl_map)
 
   return resultlist
 
@@ -4968,17 +4964,17 @@
     luci_log.debug_verbose('getFence0: model is None')
     return {}
 
-  map = {}
+  fence_map = {}
   fencename = request['fencename']
   fencedevs = model.getFenceDevices()
   for fencedev in fencedevs:
     if fencedev.getName().strip() == fencename:
-      map = fencedev.getAttributes()
+      fence_map = fencedev.getAttributes()
       try:
-        map['pretty_name'] = FENCE_OPTS[fencedev.getAgentType()]
+        fence_map['pretty_name'] = FENCE_OPTS[fencedev.getAgentType()]
       except:
-        map['unknown'] = True
-        map['pretty_name'] = fencedev.getAgentType()
+        fence_map['unknown'] = True
+        fence_map['pretty_name'] = fencedev.getAgentType()
 
       nodes_used = list()
       nodes = model.getNodes()
@@ -4998,14 +4994,16 @@
               baseurl = request['URL']
               clustername = model.getClusterName()
               node_hash = {}
-              node_hash['nodename'] = node.getName().strip()
-              node_hash['nodeurl'] = baseurl + "?clustername=" + clustername + "&nodename=" + node.getName() + "&pagetype=" + NODE
+              cur_nodename = node.getName().strip()
+              node_hash['nodename'] = cur_nodename
+              node_hash['nodeurl'] = '%s?clustername=%s&nodename=%s&pagetype=%s' \
+                % (baseurl, clustername, cur_nodename, NODE)
               nodes_used.append(node_hash)
 
-      map['nodesused'] = nodes_used
-      return map
+      fence_map['nodesused'] = nodes_used
+      return fence_map
 
-  return map
+  return fence_map
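
The try/except around the FENCE_OPTS lookup above catches every
exception, not just a missing key. If FENCE_OPTS is a plain dict keyed
by agent type (as the subscripting suggests), dict.get() expresses the
same fallback without the bare except -- a sketch:

    agent_type = fencedev.getAgentType()
    pretty_name = FENCE_OPTS.get(agent_type)
    if pretty_name is None:
        # Unknown agent: flag it and fall back to the raw type name.
        fence_map['unknown'] = True
        pretty_name = agent_type
    fence_map['pretty_name'] = pretty_name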
 
 def getFDForInstance(fds, name):
   for fd in fds:
@@ -5034,15 +5032,15 @@
     luci_log.debug_verbose('getFenceInfo1: no request.URL')
     return {}
 
-  map = {}
+  fence_map = {}
   level1 = list() #First level fence devices
   level2 = list() #Second level fence devices
   shared1 = list() #List of available sharable fence devs not used in level1
   shared2 = list() #List of available sharable fence devs not used in level2
-  map['level1'] = level1
-  map['level2'] = level2
-  map['shared1'] = shared1
-  map['shared2'] = shared2
+  fence_map['level1'] = level1
+  fence_map['level2'] = level2
+  fence_map['shared1'] = shared1
+  fence_map['shared2'] = shared2
 
   major_num = 1
   minor_num = 100
@@ -5074,7 +5072,7 @@
   len_levels = len(levels)
 
   if len_levels == 0:
-    return map
+    return fence_map
 
   if len_levels >= 1:
     first_level = levels[0]
@@ -5139,7 +5137,8 @@
               fencedev['unknown'] = True
               fencedev['prettyname'] = fd.getAgentType()
             fencedev['isShared'] = True
-            fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV
+            fencedev['cfgurl'] = '%s?clustername=%s&fencename=%s&pagetype=%s' \
+              % (baseurl, clustername, fd.getName().strip(), FENCEDEV)
             fencedev['id'] = str(major_num)
             major_num = major_num + 1
             inlist = list()
@@ -5159,7 +5158,7 @@
             level1.append(fencedev)
             last_kid_fd = fencedev
             continue
-    map['level1'] = level1
+    fence_map['level1'] = level1
 
     #level1 list is complete now, but it is still necessary to build shared1
     for fd in fds:
@@ -5181,7 +5180,7 @@
           shared_struct['unknown'] = True
           shared_struct['prettyname'] = agentname
         shared1.append(shared_struct)
-    map['shared1'] = shared1
+    fence_map['shared1'] = shared1
 
   #YUK: This next section violates the DRY rule, :-(
   if len_levels >= 2:
@@ -5246,7 +5245,8 @@
               fencedev['unknown'] = True
               fencedev['prettyname'] = fd.getAgentType()
             fencedev['isShared'] = True
-            fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV
+            fencedev['cfgurl'] = '%s?clustername=%s&fencename=%s&pagetype=%s' \
+              % (baseurl, clustername, fd.getName().strip(), FENCEDEV)
             fencedev['id'] = str(major_num)
             major_num = major_num + 1
             inlist = list()
@@ -5266,7 +5266,7 @@
             level2.append(fencedev)
             last_kid_fd = fencedev
             continue
-    map['level2'] = level2
+    fence_map['level2'] = level2
 
     #level2 list is complete but like above, we need to build shared2
     for fd in fds:
@@ -5288,16 +5288,16 @@
           shared_struct['unknown'] = True
           shared_struct['prettyname'] = agentname
         shared2.append(shared_struct)
-    map['shared2'] = shared2
+    fence_map['shared2'] = shared2
 
-  return map
+  return fence_map
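
The "YUK" comment above is fair: the level-1 and level-2 blocks differ
only in which lists they fill. The shared-device scan in particular
could be factored into one helper. A sketch over plain dict structs,
assuming the 'name' key identifies a device (helper name hypothetical):

    def shared_not_in_level(all_shared, level_devs):
        # Return the sharable device structs whose names are not
        # already used at this fence level, preserving order.
        used = {}
        for dev in level_devs:
            used[dev['name']] = True
        return [s for s in all_shared if s['name'] not in used]

    # shared1 = shared_not_in_level(all_shared, level1)
    # shared2 = shared_not_in_level(all_shared, level2)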
 
 def getFencesInfo(self, model, request):
-  map = {}
+  fences_map = {}
   if not model:
     luci_log.debug_verbose('getFencesInfo0: model is None')
-    map['fencedevs'] = list()
-    return map
+    fences_map['fencedevs'] = list()
+    return fences_map
 
   clustername = request['clustername']
   baseurl = request['URL']
@@ -5325,7 +5325,8 @@
 
       fencedev['agent'] = fd.getAgentType()
       #Add config url for this fencedev
-      fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV
+      fencedev['cfgurl'] = '%s?clustername=%s&fencename=%s&pagetype=%s' \
+        % (baseurl, clustername, fd.getName().strip(), FENCEDEV)
 
       nodes = model.getNodes()
       for node in nodes:
@@ -5342,15 +5343,17 @@
               if found_duplicate == True:
                 continue
               node_hash = {}
-              node_hash['nodename'] = node.getName().strip()
-              node_hash['nodeurl'] = baseurl + "?clustername=" + clustername + "&nodename=" + node.getName() + "&pagetype=" + NODE
+              cur_nodename = node.getName().strip()
+              node_hash['nodename'] = cur_nodename
+              node_hash['nodeurl'] = '%s?clustername=%s&nodename=%s&pagetype=%s' \
+                % (baseurl, clustername, cur_nodename, NODE)
               nodes_used.append(node_hash)
 
       fencedev['nodesused'] = nodes_used
       fencedevs.append(fencedev)
 
-  map['fencedevs'] = fencedevs
-  return map
+  fences_map['fencedevs'] = fencedevs
+  return fences_map
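
The found_duplicate flag above rescans nodes_used for every fence
instance, which is quadratic in the number of references. An index of
names seen so far keeps the same output in one pass -- a runnable
sketch, assuming node names are the identity:

    seen = {}
    nodes_used = []
    for nodename in ['node1', 'node2', 'node1']:
        if nodename in seen:
            # Already recorded for this device; skip the duplicate.
            continue
        seen[nodename] = True
        nodes_used.append({'nodename': nodename})
    # nodes_used holds one entry per distinct node name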
 
 def getLogsForNode(self, request):
 	try:
@@ -5408,349 +5411,364 @@
 
 		return 'Luci is not authenticated to node %s. Please reauthenticate first.' % nodename
 
-	return getNodeLogs(rc)
+	return rq.getNodeLogs(rc)
 
 def getVMInfo(self, model, request):
-  map = {}
-  baseurl = request['URL']
-  clustername = request['clustername']
-  svcname = None
+	vm_map = {}
 
-  try:
-    svcname = request['servicename']
-  except KeyError, e:
-    svcname = None
-  urlstring = baseurl + "?" + clustername + "&pagetype=29"
-  if svcname != None:
-    urlstring = urlstring + "&servicename=" + svcname
+	try:
+		clustername = request['clustername']
+	except Exception, e:
+		try:
+			clustername = model.getName()
+		except:
+			return vm_map
+
+	svcname = None
+	try:
+		svcname = request['servicename']
+	except Exception, e:
+		try:
+			svcname = request.form['servicename']
+		except Exception, e:
+			return vm_map
 
-  map['formurl'] = urlstring
+	vm_map['formurl'] = '%s?clustername=%s&pagetype=29&servicename=%s' \
+		% (request['URL'], clustername, svcname)
 
-  try:
-    vmname = request['servicename']
-  except:
-    try:
-      vmname = request.form['servicename']
-    except:
-      luci_log.debug_verbose('servicename is missing from request')
-      return map
+	try:
+		vm = model.retrieveVMsByName(svcname)
+	except:
+		luci_log.debug('An error occurred while attempting to get VM %s' \
+			% svcname)
+		return vm_map
 
-  try:
-    vm = model.retrieveVMsByName(vmname)
-  except:
-    luci_log.debug('An error occurred while attempting to get VM %s' \
-      % vmname)
-    return map
-
-  attrs = vm.getAttributes()
-  keys = attrs.keys()
-  for key in keys:
-    map[key] = attrs[key]
-  return map
+	attrs = vm.getAttributes()
+	keys = attrs.keys()
+	for key in keys:
+		vm_map[key] = attrs[key]
+
+	return vm_map
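
getVMInfo, and isClusterBusy below, both use the same nested
try/except ladder to read a variable from the request and then from
the posted form. A small accessor would flatten those ladders -- a
sketch, assuming a Zope-style request whose __getitem__ raises
KeyError (the helper name is hypothetical):

    def get_req_var(req, key, default=None):
        # Look in the request itself first, then the posted form,
        # mirroring the nested try/except blocks used throughout.
        try:
            return req[key]
        except KeyError:
            try:
                return req.form[key]
            except (KeyError, AttributeError):
                return default

    # cluname = get_req_var(req, 'clustername') or \
    #           get_req_var(req, 'clusterName')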
 
 def isClusterBusy(self, req):
-  items = None
-  map = {}
-  isBusy = False
-  redirect_message = False
-  nodereports = list()
-  map['nodereports'] = nodereports
+	items = None
+	busy_map = {}
+	isBusy = False
+	redirect_message = False
+	nodereports = list()
+	busy_map['nodereports'] = nodereports
 
-  try:
-    cluname = req['clustername']
-  except KeyError, e:
-    try:
-      cluname = req.form['clustername']
-    except:
-      try:
-        cluname = req.form['clusterName']
-      except:
-        luci_log.debug_verbose('ICB0: No cluster name -- returning empty map')
-        return map
+	try:
+		cluname = req['clustername']
+	except KeyError, e:
+		try:
+			cluname = req.form['clustername']
+		except:
+			try:
+				cluname = req.form['clusterName']
+			except:
+				luci_log.debug_verbose('ICB0: No cluster name -- returning empty map')
+				return busy_map
 
-  path = str(CLUSTER_FOLDER_PATH + cluname)
-  try:
-    clusterfolder = self.restrictedTraverse(path)
-    if not clusterfolder:
-      raise Exception, 'clusterfolder is None'
-  except Exception, e:
-    luci_log.debug_verbose('ICB1: cluster %s [%s] folder missing: %s -- returning empty map' % (cluname, path, str(e)))
-    return map
-  except:
-    luci_log.debug_verbose('ICB2: cluster %s [%s] folder missing: returning empty map' % (cluname, path))
+	path = '%s%s' % (CLUSTER_FOLDER_PATH, cluname)
 
-  try:
-    items = clusterfolder.objectItems('ManagedSystem')
-    if not items or len(items) < 1:
-      luci_log.debug_verbose('ICB3: NOT BUSY: no flags at %s for cluster %s' \
-          % (cluname, path))
-      return map  #This returns an empty map, and should indicate not busy
-  except Exception, e:
-    luci_log.debug('ICB4: An error occurred while looking for cluster %s flags at path %s: %s' % (cluname, path, str(e)))
-    return map
-  except:
-    luci_log.debug('ICB5: An error occurred while looking for cluster %s flags at path %s' % (cluname, path))
-    return map
+	try:
+		clusterfolder = self.restrictedTraverse(path)
+		if not clusterfolder:
+			raise Exception, 'clusterfolder is None'
+	except Exception, e:
+		luci_log.debug_verbose('ICB1: cluster %s [%s] folder missing: %s -- returning empty map' % (cluname, path, str(e)))
+		return busy_map
+	except:
+		luci_log.debug_verbose('ICB2: cluster %s [%s] folder missing: returning empty map' % (cluname, path))
+		return busy_map
 
-  luci_log.debug_verbose('ICB6: %s is busy: %d flags' \
-      % (cluname, len(items)))
-  map['busy'] = "true"
-  #Ok, here is what is going on...if there is an item,
-  #we need to call the ricci_bridge and get a batch report.
-  #This report will tell us one of three things:
-  ##1) the batch task is complete...delete ManagedSystem and render
-  ##normal page
-  ##2) The batch task is NOT done, so meta refresh in 5 secs and try again
-  ##3) The ricci agent has no recollection of the task, so handle like 1 above
-  ###
-  ##Here is what we have to do:
-  ##the map should have two lists:
-  ##One list of non-cluster create tasks
-  ##and one of cluster create task structs
-  ##For each item in items, check if this is a cluster create tasktype
-  ##If so, call RC, and then call stan's batch report method
-  ##check for error...if error, report and then remove flag.
-  ##if no error, check if complete. If not complete, report status
-  ##If complete, report status and remove flag.
-
-  for item in items:
-    tasktype = item[1].getProperty(TASKTYPE)
-    if tasktype == CLUSTER_ADD or tasktype == NODE_ADD:
-      node_report = {}
-      node_report['isnodecreation'] = True
-      node_report['iserror'] = False  #Default value
-      node_report['desc'] = item[1].getProperty(FLAG_DESC)
-      batch_xml = None
-      ricci = item[0].split("____") #This removes the 'flag' suffix
+	try:
+		items = clusterfolder.objectItems('ManagedSystem')
+		if not items or len(items) < 1:
+			luci_log.debug_verbose('ICB3: NOT BUSY: no flags at %s for cluster %s' % (cluname, path))
+			# This returns an empty map, and indicates not busy
+			return busy_map
+	except Exception, e:
+		luci_log.debug('ICB4: An error occurred while looking for cluster %s flags at path %s: %s' % (cluname, path, str(e)))
+		return busy_map
+	except:
+		luci_log.debug('ICB5: An error occurred while looking for cluster %s flags at path %s' % (cluname, path))
+		return busy_map
+
+	luci_log.debug_verbose('ICB6: %s is busy: %d flags' \
+		% (cluname, len(items)))
+	busy_map['busy'] = 'true'
+
+	# Ok, here is what is going on...if there is an item,
+	# we need to call ricci to get a batch report.
+	# This report will tell us one of three things:
+	#
+	# #1) the batch task is complete...delete ManagedSystem and render
+	#     normal page
+	# #2) The batch task is NOT done, so meta refresh in 5 secs and try again
+	# #3) The ricci agent has no recollection of the task,
+	#     so handle like 1 above
+	###
+	#
+	# Here is what we have to do:
+	# the map should have two lists:
+	#  One list of non-cluster create tasks
+	#  and one of cluster create task structs
+	# For each item in items, check if this is a cluster create tasktype
+	# If so, call RC, and then call the batch report method
+	# check for error...if error, report and then remove flag.
+	# if no error, check if complete. If not complete, report status
+	# If complete, report status and remove flag.
 
-      luci_log.debug_verbose('ICB6A: using host %s for rc for item %s' \
-          % (ricci[0], item[0]))
-      try:
-        rc = RicciCommunicator(ricci[0])
-        if not rc:
-          rc = None
-          luci_log.debug_verbose('ICB6b: rc is none')
-      except Exception, e:
-        rc = None
-        luci_log.debug_verbose('ICB7: RC: %s: %s' \
-          % (cluname, str(e)))
+	for item in items:
+		tasktype = item[1].getProperty(TASKTYPE)
+		if tasktype == CLUSTER_ADD or tasktype == NODE_ADD:
+			node_report = {}
+			node_report['isnodecreation'] = True
+			node_report['iserror'] = False  #Default value
+			node_report['desc'] = item[1].getProperty(FLAG_DESC)
+			batch_xml = None
+			# This removes the 'flag' suffix
+			ricci = item[0].split('____')
 
-      batch_id = None
-      if rc is not None:
-        try:
-          batch_id = item[1].getProperty(BATCH_ID)
-          luci_log.debug_verbose('ICB8: got batch_id %s from %s' \
-              % (batch_id, item[0]))
-        except Exception, e:
-          try:
-            luci_log.debug_verbose('ICB8B: failed to get batch_id from %s: %s' \
-                % (item[0], str(e)))
-          except:
-            luci_log.debug_verbose('ICB8C: failed to get batch_id from %s' \
-              % item[0])
+			luci_log.debug_verbose('ICB6A: using host %s for rc for item %s' \
+				% (ricci[0], item[0]))
 
-        if batch_id is not None:
-          try:
-            batch_xml = rc.batch_report(batch_id)
-            if batch_xml is not None:
-              luci_log.debug_verbose('ICB8D: batch_xml for %s from batch_report is not None -- getting batch status' % batch_id)
-              (creation_status, total) = batch_status(batch_xml)
-              try:
-                luci_log.debug_verbose('ICB8E: batch status returned (%d,%d)' \
-                    % (creation_status, total))
-              except:
-                luci_log.debug_verbose('ICB8F: error logging batch status return')
-            else:
-              luci_log.debug_verbose('ICB9: batch_xml for cluster is None')
-          except Exception, e:
-            luci_log.debug_verbose('ICB9A: error getting batch_xml from rc.batch_report: %s' % str(e))
-            creation_status = RICCI_CONNECT_FAILURE  #No contact with ricci (-1000)
-            batch_xml = "bloody_failure" #set to avoid next if statement
-
-      if rc is None or batch_id is None:
-          luci_log.debug_verbose('ICB12: unable to connect to a ricci agent for cluster %s to get batch status')
-          creation_status = RICCI_CONNECT_FAILURE  #No contact with ricci (-1000)
-          batch_xml = "bloody_bloody_failure" #set to avoid next if statement
-
-      if batch_xml is None:  #The job is done and gone from queue
-        if redirect_message == False: #We have not displayed this message yet
-          node_report['desc'] = REDIRECT_MSG
-          node_report['iserror'] = True
-          node_report['errormessage'] = ""
-          nodereports.append(node_report)
-          redirect_message = True
+			try:
+				rc = RicciCommunicator(ricci[0])
+				if not rc:
+					rc = None
+					luci_log.debug_verbose('ICB6b: rc is none')
+			except Exception, e:
+				rc = None
+				luci_log.debug_verbose('ICB7: RC: %s: %s' % (cluname, str(e)))
 
-        luci_log.debug_verbose('ICB13: batch job is done -- deleting %s' % item[0])
-        clusterfolder.manage_delObjects([item[0]])
-        continue
+			batch_id = None
+			if rc is not None:
+				try:
+					batch_id = item[1].getProperty(BATCH_ID)
+					luci_log.debug_verbose('ICB8: got batch_id %s from %s' \
+						% (batch_id, item[0]))
+				except Exception, e:
+					try:
+						luci_log.debug_verbose('ICB8B: failed to get batch_id from %s: %s' % (item[0], str(e)))
+					except:
+						luci_log.debug_verbose('ICB8C: failed to get batch_id from %s' % item[0])
 
-      del_db_obj = False
-      if creation_status < 0:  #an error was encountered
-        luci_log.debug_verbose('ICB13a: %s: CS %d for %s' % (cluname, creation_status, ricci[0]))
-        if creation_status == RICCI_CONNECT_FAILURE:
-          laststatus = item[1].getProperty(LAST_STATUS)
-          if laststatus == INSTALL_TASK: #This means maybe node is rebooting
-            node_report['statusindex'] = INSTALL_TASK
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + POSSIBLE_REBOOT_MESSAGE
-          elif laststatus == 0:
-            node_report['statusindex'] = 0
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_INSTALL
-          elif laststatus == DISABLE_SVC_TASK:
-            node_report['statusindex'] = DISABLE_SVC_TASK
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_CFG
-          elif laststatus == REBOOT_TASK:
-            node_report['statusindex'] = REBOOT_TASK
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_CFG
-          elif laststatus == SEND_CONF:
-            node_report['statusindex'] = SEND_CONF
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_JOIN
-          elif laststatus == ENABLE_SVC_TASK:
-            node_report['statusindex'] = ENABLE_SVC_TASK
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_JOIN
-          else:
-            node_report['statusindex'] = 0
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + ' Install is in an unknown state.'
-          nodereports.append(node_report)
-          continue
-        elif creation_status == -(INSTALL_TASK):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, INSTALL_TASK)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[INSTALL_TASK] + err_msg
-          del_db_obj = True
-        elif creation_status == -(DISABLE_SVC_TASK):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, DISABLE_SVC_TASK)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[DISABLE_SVC_TASK] + err_msg
-          del_db_obj = True
-        elif creation_status == -(REBOOT_TASK):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, REBOOT_TASK)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[REBOOT_TASK] + err_msg
-          del_db_obj = True
-        elif creation_status == -(SEND_CONF):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, SEND_CONF)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[SEND_CONF] + err_msg
-        elif creation_status == -(ENABLE_SVC_TASK):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, DISABLE_SVC_TASK)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[ENABLE_SVC_TASK] + err_msg
-        elif creation_status == -(START_NODE):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, START_NODE)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[START_NODE]
-        else:
-          del_db_obj = True
-          node_report['iserror'] = True
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[0]
+				if batch_id is not None:
+					try:
+						batch_xml = rc.batch_report(batch_id)
+						if batch_xml is not None:
+							luci_log.debug_verbose('ICB8D: batch_xml for %s from batch_report is not None -- getting batch status' % batch_id)
+							(creation_status, total) = batch_status(batch_xml)
+							try:
+								luci_log.debug_verbose('ICB8E: batch status returned (%d,%d)' % (creation_status, total))
+							except:
+								luci_log.debug_verbose('ICB8F: error logging batch status return')
+						else:
+							luci_log.debug_verbose('ICB9: batch_xml for cluster is None')
+					except Exception, e:
+						luci_log.debug_verbose('ICB9A: error getting batch_xml from rc.batch_report: %s' % str(e))
+						# No contact with ricci (-1000)
+						creation_status = RICCI_CONNECT_FAILURE
+						# set to avoid next if statement
+						batch_xml = 'bloody_failure'
+
+			if rc is None or batch_id is None:
+				luci_log.debug_verbose('ICB12: unable to connect to a ricci agent for cluster %s to get batch status' % cluname)
+				# No contact with ricci (-1000)
+				creation_status = RICCI_CONNECT_FAILURE
+				# set to avoid next if statement
+				batch_xml = 'bloody_bloody_failure'
+
+			if batch_xml is None:
+				# The job is done and gone from queue
+				if redirect_message == False:
+					# We have not displayed this message yet
+					node_report['desc'] = REDIRECT_MSG
+					node_report['iserror'] = True
+					node_report['errormessage'] = ""
+					nodereports.append(node_report)
+					redirect_message = True
 
-        try:
-          if del_db_obj is True:
-            luci_log.debug_verbose('ICB13a: %s node creation failed for %s: %d: deleting DB entry' % (cluname, ricci[0], creation_status))
-            clusterfolder.manage_delObjects([ricci[0]])
-          clusterfolder.manage_delObjects([item[0]])
-        except Exception, e:
-          luci_log.debug_verbose('ICB14: delObjects: %s: %s' \
-            % (item[0], str(e)))
+				luci_log.debug_verbose('ICB13: batch job is done -- deleting %s' % item[0])
+				clusterfolder.manage_delObjects([item[0]])
+				continue
 
-        nodereports.append(node_report)
-        continue
-      else:  #either batch completed successfully, or still running
-        if creation_status == total:  #finished...
-          map['busy'] = "true"
-          node_report['statusmessage'] = "Node created successfully" + REDIRECT_MSG
-          node_report['statusindex'] = creation_status
-          nodereports.append(node_report)
-          try:
-              clusterfolder.manage_delObjects([item[0]])
-          except Exception, e:
-              luci_log.info('ICB15: Unable to delete %s: %s' % (item[0], str(e)))
-          continue
-        else:
-          map['busy'] = "true"
-          isBusy = True
-          node_report['statusmessage'] = "Node still being created"
-          node_report['statusindex'] = creation_status
-          nodereports.append(node_report)
-          propslist = list()
-          propslist.append(LAST_STATUS)
-          try:
-            item[1].manage_delProperties(propslist)
-            item[1].manage_addProperty(LAST_STATUS, creation_status, "int")
-          except Exception, e:
-            luci_log.debug_verbose('ICB16: last_status err: %s %d: %s' \
-              % (item[0], creation_status, str(e)))
-          continue
+			del_db_obj = False
+			if creation_status < 0:
+				# an error was encountered
+				luci_log.debug_verbose('ICB13a: %s: CS %d for %s' % (cluname, creation_status, ricci[0]))
+				if creation_status == RICCI_CONNECT_FAILURE:
+					laststatus = item[1].getProperty(LAST_STATUS)
+
+					if laststatus == INSTALL_TASK:
+						# The node may be rebooting
+						node_report['statusindex'] = INSTALL_TASK
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, POSSIBLE_REBOOT_MESSAGE)
+					elif laststatus == 0:
+						# Installation has not started yet
+						node_report['statusindex'] = 0
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_INSTALL)
+					elif laststatus == DISABLE_SVC_TASK:
+						node_report['statusindex'] = DISABLE_SVC_TASK
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_CFG)
+					elif laststatus == REBOOT_TASK:
+						node_report['statusindex'] = REBOOT_TASK
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_CFG)
+					elif laststatus == SEND_CONF:
+						node_report['statusindex'] = SEND_CONF
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_JOIN)
+					elif laststatus == ENABLE_SVC_TASK:
+						node_report['statusindex'] = ENABLE_SVC_TASK
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_JOIN)
+					else:
+						node_report['statusindex'] = 0
+						node_report['statusmessage'] = '%s Install is in an unknown state.' % RICCI_CONNECT_FAILURE_MSG
+					nodereports.append(node_report)
+					continue
+				elif creation_status == -(INSTALL_TASK):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, INSTALL_TASK)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[INSTALL_TASK] % err_msg
+					del_db_obj = True
+				elif creation_status == -(DISABLE_SVC_TASK):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, DISABLE_SVC_TASK)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[DISABLE_SVC_TASK] % err_msg
+					del_db_obj = True
+				elif creation_status == -(REBOOT_TASK):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, REBOOT_TASK)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[REBOOT_TASK] % err_msg
+					del_db_obj = True
+				elif creation_status == -(SEND_CONF):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, SEND_CONF)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[SEND_CONF] % err_msg
+				elif creation_status == -(ENABLE_SVC_TASK):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, ENABLE_SVC_TASK)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[ENABLE_SVC_TASK] % err_msg
+				elif creation_status == -(START_NODE):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, START_NODE)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[START_NODE] % err_msg
+				else:
+					del_db_obj = True
+					node_report['iserror'] = True
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[0] % ''
 
-    else:
-      node_report = {}
-      node_report['isnodecreation'] = False
-      ricci = item[0].split("____") #This removes the 'flag' suffix
+				try:
+					if del_db_obj is True:
+						luci_log.debug_verbose('ICB13a: %s node creation failed for %s: %d: deleting DB entry' % (cluname, ricci[0], creation_status))
+						clusterfolder.manage_delObjects([ricci[0]])
+					clusterfolder.manage_delObjects([item[0]])
+				except Exception, e:
+					luci_log.debug_verbose('ICB14: delObjects: %s: %s' \
+						% (item[0], str(e)))
 
-      try:
-        rc = RicciCommunicator(ricci[0])
-      except Exception, e:
-        rc = None
-        finished = -1
-        err_msg = ''
-        luci_log.debug_verbose('ICB15: ricci error: %s: %s' \
-          % (ricci[0], str(e)))
-
-      if rc is not None:
-        batch_res = checkBatch(rc, item[1].getProperty(BATCH_ID))
-        finished = batch_res[0]
-        err_msg = batch_res[1]
-
-      if finished == True or finished == -1:
-        if finished == -1:
-          flag_msg = err_msg
-        else:
-          flag_msg = ''
-        flag_desc = item[1].getProperty(FLAG_DESC)
-        if flag_desc is None:
-          node_report['desc'] = flag_msg + REDIRECT_MSG
-        else:
-          node_report['desc'] = flag_msg + flag_desc + REDIRECT_MSG
-        nodereports.append(node_report)
-        try:
-            clusterfolder.manage_delObjects([item[0]])
-        except Exception, e:
-            luci_log.info('ICB16: Unable to delete %s: %s' % (item[0], str(e)))
-      else:
-        node_report = {}
-        map['busy'] = "true"
-        isBusy = True
-        node_report['desc'] = item[1].getProperty(FLAG_DESC)
-        nodereports.append(node_report)
-
-  if isBusy:
-    part1 = req['ACTUAL_URL']
-    part2 = req['QUERY_STRING']
-
-    dex = part2.find("&busyfirst")
-    if dex != (-1):
-      tmpstr = part2[:dex] #This strips off busyfirst var
-      part2 = tmpstr
-      ###FIXME - The above assumes that the 'busyfirst' query var is at the
-      ###end of the URL...
-    wholeurl = part1 + "?" + part2
-    map['refreshurl'] = "5; url=" + wholeurl
-    req['specialpagetype'] = "1"
-  else:
-    try:
-      query = req['QUERY_STRING'].replace('&busyfirst=true', '')
-      map['refreshurl'] = '5; url=' + req['ACTUAL_URL'] + '?' + query
-    except:
-      map['refreshurl'] = '5; url=/luci/cluster?pagetype=3'
-  return map
+				nodereports.append(node_report)
+				continue
+			else:
+				# either the batch completed successfully, or it's still running
+				if creation_status == total:
+					#finished...
+					busy_map['busy'] = 'true'
+					node_report['statusmessage'] = 'Node created successfully. %s' % REDIRECT_MSG
+					node_report['statusindex'] = creation_status
+					nodereports.append(node_report)
+					try:
+						clusterfolder.manage_delObjects([item[0]])
+					except Exception, e:
+						luci_log.info('ICB15: Unable to delete %s: %s' \
+							% (item[0], str(e)))
+					continue
+				else:
+					busy_map['busy'] = 'true'
+					isBusy = True
+					node_report['statusmessage'] = 'Node still being created'
+					node_report['statusindex'] = creation_status
+					nodereports.append(node_report)
+					propslist = list()
+					propslist.append(LAST_STATUS)
+					try:
+						item[1].manage_delProperties(propslist)
+						item[1].manage_addProperty(LAST_STATUS, creation_status, 'int')
+					except Exception, e:
+						luci_log.debug_verbose('ICB16: last_status err: %s %d: %s' % (item[0], creation_status, str(e)))
+					continue
+		else:
+			node_report = {}
+			node_report['isnodecreation'] = False
+			# This removes the 'flag' suffix
+			ricci = item[0].split('____')
+
+			try:
+				rc = RicciCommunicator(ricci[0])
+			except Exception, e:
+				rc = None
+				finished = -1
+				err_msg = ''
+				luci_log.debug_verbose('ICB15: ricci error: %s: %s' \
+					% (ricci[0], str(e)))
+
+			if rc is not None:
+				batch_res = rq.checkBatch(rc, item[1].getProperty(BATCH_ID))
+				finished = batch_res[0]
+				err_msg = batch_res[1]
+
+			if finished == True or finished == -1:
+				if finished == -1:
+					flag_msg = err_msg
+				else:
+					flag_msg = ''
+				flag_desc = item[1].getProperty(FLAG_DESC)
+				if flag_desc is None:
+					node_report['desc'] = '%s%s' % (flag_msg, REDIRECT_MSG)
+				else:
+					node_report['desc'] = '%s%s%s' % (flag_msg, flag_desc, REDIRECT_MSG)
+				nodereports.append(node_report)
+
+				try:
+					clusterfolder.manage_delObjects([item[0]])
+				except Exception, e:
+					luci_log.info('ICB16: Unable to delete %s: %s' \
+						% (item[0], str(e)))
+			else:
+				node_report = {}
+				busy_map['busy'] = 'true'
+				isBusy = True
+				node_report['desc'] = item[1].getProperty(FLAG_DESC)
+				nodereports.append(node_report)
+
+	if isBusy:
+		part1 = req['ACTUAL_URL']
+		part2 = req['QUERY_STRING']
+
+		dex = part2.find("&busyfirst")
+		if dex != (-1):
+			tmpstr = part2[:dex] #This strips off busyfirst var
+			part2 = tmpstr
+		###FIXME - The above assumes that the 'busyfirst' query var is at the
+		###end of the URL...
+		busy_map['refreshurl'] = '5; url=%s?%s' % (part1, part2)
+		req['specialpagetype'] = '1'
+	else:
+		try:
+			query = req['QUERY_STRING'].replace('&busyfirst=true', '')
+			busy_map['refreshurl'] = '5; url=%s?%s' % (req['ACTUAL_URL'], query)
+		except:
+			busy_map['refreshurl'] = '5; url=/luci/cluster?pagetype=3'
+	return busy_map
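
On the FIXME above: find("&busyfirst") assumes the variable's position
and would also match a variable merely prefixed with "busyfirst".
Parsing the query string removes both assumptions -- a sketch using
only the Python 2 standard library (helper name hypothetical):

    import cgi, urllib

    def strip_query_var(query, name):
        # Drop one variable from a query string regardless of where
        # it appears, then re-encode the remainder.
        pairs = [(k, v) for k, v in cgi.parse_qsl(query) if k != name]
        return urllib.urlencode(pairs)

    # strip_query_var('pagetype=3&busyfirst=true&foo=1', 'busyfirst')
    # returns 'pagetype=3&foo=1'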
 
 def getClusterOS(self, rc):
-	map = {}
+	clu_map = {}
 
 	try:
 		os_str = resolveOSType(rc.os())
-		map['os'] = os_str
-		map['isVirtualized'] = rc.dom0()
+		clu_map['os'] = os_str
+		clu_map['isVirtualized'] = rc.dom0()
 	except:
 		# default to rhel5 if something crazy happened.
 		try:
@@ -5759,9 +5777,9 @@
 			# this can throw an exception if the original exception
 			# is caused by rc being None or stale.
 			pass
-		map['os'] = 'rhel5'
-		map['isVirtualized'] = False
-	return map
+		clu_map['os'] = 'rhel5'
+		clu_map['isVirtualized'] = False
+	return clu_map
 
 def getResourcesInfo(model, request):
 	resList = list()
@@ -5778,13 +5796,17 @@
 
 	for item in model.getResources():
 		itemmap = {}
-		itemmap['name'] = item.getName()
+		cur_itemname = item.getName().strip()
+		itemmap['name'] = cur_itemname
 		itemmap['attrs'] = item.attr_hash
 		itemmap['type'] = item.resource_type
 		itemmap['tag_name'] = item.TAG_NAME
-		itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + item.getName() + "&pagetype=" + RESOURCE_CONFIG
-		itemmap['url'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + item.getName() + "&pagetype=" + RESOURCE
-		itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + item.getName() + "&pagetype=" + RESOURCE_REMOVE
+		itemmap['cfgurl'] = '%s?clustername=%s&resourcename=%s&pagetype=%s' \
+			% (baseurl, cluname, cur_itemname, RESOURCE_CONFIG)
+		itemmap['url'] = '%s?clustername=%s&resourcename=%s&pagetype=%s' \
+			% (baseurl, cluname, cur_itemname, RESOURCE)
+		itemmap['delurl'] = '%s?clustername=%s&resourcename=%s&pagetype=%s' \
+			% (baseurl, cluname, cur_itemname, RESOURCE_REMOVE)
 		resList.append(itemmap)
 	return resList
 
@@ -5833,14 +5855,17 @@
 		if res.getName() == name:
 			try:
 				resMap = {}
-				resMap['name'] = res.getName()
+				cur_resname = res.getName().strip()
+				resMap['name'] = cur_resname
 				resMap['type'] = res.resource_type
 				resMap['tag_name'] = res.TAG_NAME
 				resMap['attrs'] = res.attr_hash
-				resMap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + res.getName() + "&pagetype=" + RESOURCE_CONFIG
+				resMap['cfgurl'] = '%s?clustername=%s&resourcename=%s&pagetype=%s' \
+					% (baseurl, cluname, cur_resname, RESOURCE_CONFIG)
 				return resMap
 			except:
 				continue
+	return {}
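
The new trailing "return {}" is a real fix, not just style: a miss
previously fell off the end of the function and returned None, so any
caller that immediately subscripted the result would raise a
TypeError. Callers can now just test the dict -- a usage sketch
(caller name hypothetical):

    res_map = get_resource_map(model, request)
    if not res_map:
        # Lookup failed; an empty dict comes back instead of None.
        luci_log.debug_verbose('no matching resource was found')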
 
 def delService(self, request):
 	errstr = 'An error occurred while attempting to set the new cluster.conf'
@@ -5894,7 +5919,7 @@
 		model.deleteService(name)
 	except Exception, e:
 		luci_log.debug_verbose('delService5: Unable to find a service named %s for cluster %s' % (name, clustername))
-		return (False, {'errors': [ '%s: error removing service %s.' % (errstr, name) ]})
+		return (False, {'errors': [ '%s: error removing service "%s"' % (errstr, name) ]})
 
 	try:
 		model.setModified(True)
@@ -5904,20 +5929,21 @@
 	except Exception, e:
 		luci_log.debug_verbose('delService6: exportModelAsString failed: %s' \
 			% str(e))
-		return (False, {'errors': [ '%s: error removing service %s.' % (errstr, name) ]})
+		return (False, {'errors': [ '%s: error removing service "%s"' % (errstr, name) ]})
 
-	batch_number, result = setClusterConf(rc, str(conf))
+	batch_number, result = rq.setClusterConf(rc, str(conf))
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('delService7: missing batch and/or result')
-		return (False, {'errors': [ '%s: error removing service %s.' % (errstr, name) ]})
+		return (False, {'errors': [ '%s: error removing service "%s"' % (errstr, name) ]})
 
 	try:
-		set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_DELETE, "Removing service \'%s\'" % name)
+		set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_DELETE, 'Removing service "%s"' % name)
 	except Exception, e:
 		luci_log.debug_verbose('delService8: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + SERVICES + "&clustername=" + clustername + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], SERVICES, clustername))
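
delService above, and delResource and addResource below, all end with
the same handshake: record a flag carrying the batch number via
set_node_flag, then redirect the browser with busyfirst=true so that
isClusterBusy polls the flag on the next render. If that tail were
ever factored out it might look like this sketch (helper name
hypothetical):

    def redirect_busy(request, pagetype, clustername):
        # Bounce back to the cluster page; busyfirst=true makes the
        # page poll isClusterBusy until the batch flag is cleared.
        request.RESPONSE.redirect(
            '%s?pagetype=%s&clustername=%s&busyfirst=true'
            % (request['URL'], pagetype, clustername))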
 
 def delResource(self, rc, request):
 	errstr = 'An error occurred while attempting to set the new cluster.conf'
@@ -5939,7 +5965,7 @@
 
 	if name is None:
 		luci_log.debug_verbose('delResource1: no resource name')
-		return errstr + ': no resource name was provided.'
+		return '%s: no resource name was provided.' % errstr
 
 	clustername = None
 	try:
@@ -5952,7 +5978,7 @@
 
 	if clustername is None:
 		luci_log.debug_verbose('delResource2: no cluster name for %s' % name)
-		return errstr + ': could not determine the cluster name.'
+		return '%s: could not determine the cluster name.' % errstr
 
 	try:
 		ragent = rc.hostname()
@@ -5960,7 +5986,7 @@
 			raise Exception, 'unable to determine the hostname of the ricci agent'
 	except Exception, e:
 		luci_log.debug_verbose('delResource3: %s: %s' % (errstr, str(e)))
-		return errstr + ': could not determine the ricci agent hostname'
+		return '%s: could not determine the ricci agent hostname.' % errstr
 
 	resPtr = model.getResourcesPtr()
 	resources = resPtr.getChildren()
@@ -5974,7 +6000,7 @@
 
 	if not found:
 		luci_log.debug_verbose('delResource4: cant find res %s' % name)
-		return errstr + ': the specified resource was not found.'
+		return '%s: the specified resource was not found.' % errstr
 
 	try:
 		model.setModified(True)
@@ -5986,1455 +6012,90 @@
 			% str(e))
 		return errstr
 
-	batch_number, result = setClusterConf(rc, str(conf))
+	batch_number, result = rq.setClusterConf(rc, str(conf))
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('delResource6: missing batch and/or result')
 		return errstr
 
 	try:
-		set_node_flag(self, clustername, ragent, str(batch_number), RESOURCE_REMOVE, "Removing resource \'%s\'" % request['resourcename'])
+		set_node_flag(self, clustername, ragent, str(batch_number), RESOURCE_REMOVE, 'Removing resource "%s"' % request['resourcename'])
 	except Exception, e:
 		luci_log.debug_verbose('delResource7: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + RESOURCES + "&clustername=" + clustername + '&busyfirst=true')
-
-def addIp(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addIp0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addIp1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], RESOURCES, clustername))
 
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No IP resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this IP resource.')
-	else:
-		try:
-			res = Ip()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating an IP resource.')
-			luci_log.debug_verbose('addIp3: %s' % str(e))
+def addResource(self, request, model, res):
+	clustername = model.getClusterName()
+	if not clustername:
+		luci_log.debug_verbose('addResource0: no cluname from mb')
+		return 'Unable to determine cluster name'
 
-	if not res:
-		return [None, None, errors]
+	rc = getRicciAgent(self, clustername)
+	if not rc:
+		luci_log.debug_verbose('addResource1: %s' % clustername)
+		return 'Unable to find a ricci agent for the %s cluster' % clustername
 
 	try:
-		addr = form['ip_address'].strip()
-		if not addr:
-			raise KeyError, 'ip_address is blank'
-		# XXX: validate IP addr
-		res.addAttribute('address', addr)
-	except KeyError, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addIp4: %s' % err)
-
-	if 'monitorLink' in form:
-		res.addAttribute('monitor_link', '1')
-	else:
-		res.addAttribute('monitor_link', '0')
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addFs(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addFs0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addFs1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No filesystem resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this filesystem resource.')
-			luci_log.debug_verbose('addFs3: %s' % str(e))
-	else:
-		try:
-			res = Fs()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a filesystem resource.')
-			luci_log.debug_verbose('addFs4: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
+		model.getResourcesPtr().addChild(res)
+	except Exception, e:
+		luci_log.debug_verbose('addResource2: %s' % str(e))
+		return 'Unable to add the new resource'
 
-	# XXX: sanity check these fields
 	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this filesystem resource.'
-		res.addAttribute('name', name)
+		model.setModified(True)
+		conf = model.exportModelAsString()
+		if not conf:
+			raise Exception, 'model string for %s is blank' % clustername
 	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs5: %s' % err)
-
-	try:
-		mountpoint = form['mountpoint'].strip()
-		if not mountpoint:
-			raise Exception, 'No mount point was given for this filesystem resource.'
-		res.addAttribute('mountpoint', mountpoint)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs6: %s' % err)
+		luci_log.debug_verbose('addResource3: exportModelAsString: %s' \
+			% str(e))
+		return 'An error occurred while adding this resource'
 
 	try:
-		device = form['device'].strip()
-		if not device:
-			raise Exception, 'No device was given for this filesystem resource.'
-		res.addAttribute('device', device)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs7: %s' % err)
+		ragent = rc.hostname()
+		if not ragent:
+			luci_log.debug_verbose('addResource4: missing ricci hostname')
+			raise Exception, 'unknown ricci agent hostname'
 
-	try:
-		options = form['options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('options')
-		except:
-			pass
+		batch_number, result = rq.setClusterConf(rc, str(conf))
+		if batch_number is None or result is None:
+			luci_log.debug_verbose('addResource5: missing batch_number or result')
+			raise Exception, 'unable to save the new cluster configuration.'
 	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs8: %s' % err)
+		luci_log.debug_verbose('addResource6: %s' % str(e))
+		return 'An error occurred while propagating the new cluster.conf: %s' % str(e)
 
 	try:
-		fstype = form['fstype'].strip()
-		if not fstype:
-			raise Exception, 'No filesystem type was given for this filesystem resource.'
-		res.addAttribute('fstype', fstype)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs9: %s' % err)
+		try:
+			if request.form.has_key('edit'):
+				action_type = RESOURCE_CONFIG
+				action_str = 'Configuring resource "%s"' % res.getName()
+			else:
+				raise Exception, 'new'
+		except Exception, e:
+			action_type = RESOURCE_ADD
+			action_str = 'Creating new resource "%s"' % res.getName()
 
-	try:
-		fsid = form['fsid'].strip()
-		if not fsid:
-			raise Exception, 'No filesystem ID was given for this filesystem resource.'
-		fsid_int = int(fsid)
-		if not fsid_is_unique(model, fsid_int):
-			raise Exception, 'The filesystem ID provided is not unique.'
+		set_node_flag(self, clustername, ragent, str(batch_number), action_type, action_str)
 	except Exception, e:
-		fsid = str(generate_fsid(model, name))
-	res.addAttribute('fsid', fsid)
+		luci_log.debug_verbose('addResource7: failed to set flags: %s' % str(e))
 
-	if form.has_key('forceunmount'):
-		res.addAttribute('force_unmount', '1')
-	else:
-		res.addAttribute('force_unmount', '0')
+	response = request.RESPONSE
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true'
+		% (request['URL'], RESOURCES, clustername))
 
-	if form.has_key('selffence'):
-		res.addAttribute('self_fence', '1')
-	else:
-		res.addAttribute('self_fence', '0')
+def getResource(model, name):
+	resPtr = model.getResourcesPtr()
+	resources = resPtr.getChildren()
 
-	if form.has_key('checkfs'):
-		res.addAttribute('force_fsck', '1')
-	else:
-		res.addAttribute('force_fsck', '0')
+	for res in resources:
+		if res.getName() == name:
+			return res
 
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addGfs(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addGfs0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addGfs1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No filesystem resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this cluster filesystem resource.')
-			luci_log.debug_verbose('addGfs2: %s' % str(e))
-	else:
-		try:
-			res = Clusterfs()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a cluster filesystem resource.')
-			luci_log.debug('addGfs3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	# XXX: sanity check these fields
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this cluster filesystem resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addGfs4: %s' % err)
-
-	try:
-		mountpoint = form['mountpoint'].strip()
-		if not mountpoint:
-			raise Exception, 'No mount point was given for this cluster filesystem resource.'
-		res.addAttribute('mountpoint', mountpoint)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addGfs5: %s' % err)
-
-	try:
-		device = form['device'].strip()
-		if not device:
-			raise Exception, 'No device was given for this cluster filesystem resource.'
-		res.addAttribute('device', device)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addGfs6: %s' % err)
-
-	try:
-		options = form['options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addGfs7: %s' % err)
-
-	try:
-		fsid = form['fsid'].strip()
-		if not fsid:
-			raise Exception, 'No filesystem ID was given for this cluster filesystem resource.'
-		fsid_int = int(fsid)
-		if not fsid_is_unique(model, fsid_int):
-			raise Exception, 'The filesystem ID provided is not unique.'
-	except Exception, e:
-		fsid = str(generate_fsid(model, name))
-	res.addAttribute('fsid', fsid)
-
-	if form.has_key('forceunmount'):
-		res.addAttribute('force_unmount', '1')
-	else:
-		res.addAttribute('force_unmount', '0')
-
-	if len(errors) > 1:
-		return [None, None, errors]
-
-	return [res, model, None]
-
-def addNfsm(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addNfsm0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addNfsm1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No NFS mount resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this NFS mount resource.')
-			luci_log.debug_verbose('addNfsm2: %s' % str(e))
-	else:
-		try:
-			res = Netfs()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a NFS mount resource.')
-			luci_log.debug_verbose('addNfsm3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	# XXX: sanity check these fields
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this NFS mount resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm4: %s' % err)
-
-	try:
-		mountpoint = form['mountpoint'].strip()
-		if not mountpoint:
-			raise Exception, 'No mount point was given for NFS mount resource.'
-		res.addAttribute('mountpoint', mountpoint)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm5: %s' % err)
-
-	try:
-		host = form['host'].strip()
-		if not host:
-			raise Exception, 'No host server was given for this NFS mount resource.'
-		res.addAttribute('host', host)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm6 error: %s' % err)
-
-	try:
-		options = form['options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm7: %s' % err)
-
-	try:
-		exportpath = form['exportpath'].strip()
-		if not exportpath:
-			raise Exception, 'No export path was given for this NFS mount resource.'
-		res.addAttribute('exportpath', exportpath)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm8: %s' % err)
-
-	try:
-		nfstype = form['nfstype'].strip().lower()
-		if nfstype != 'nfs' and nfstype != 'nfs4':
-			raise Exception, 'An invalid NFS version \"%s\" was given.' % nfstype
-		res.addAttribute('nfstype', nfstype)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm9: %s' % err)
-
-	if form.has_key('forceunmount'):
-		res.addAttribute('force_unmount', '1')
-	else:
-		res.addAttribute('force_unmount', '0')
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addNfsc(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addNfsc0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addNfsc1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No NFS client resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this NFS client resource.')
-			luci_log.debug_verbose('addNfsc2: %s' % str(e))
-	else:
-		try:
-			res = NFSClient()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a NFS client resource.')
-			luci_log.debug_verbose('addNfsc3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	# XXX: sanity check these fields
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this NFS client resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsc4: %s' % err)
-
-	try:
-		target = form['target'].strip()
-		if not target:
-			raise Exception, 'No target was given for NFS client resource.'
-		res.addAttribute('target', target)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsc5: %s' % err)
-
-	try:
-		options = form['options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsc6: %s' % err)
-
-	if form.has_key('allow_recover'):
-		res.addAttribute('allow_recover', '1')
-	else:
-		res.addAttribute('allow_recover', '0')
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addNfsx(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addNfsx0: model is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addNfsx0: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No NFS export resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this NFS export resource.')
-			luci_log.debug_verbose('addNfsx2: %s', str(e))
-	else:
-		try:
-			res = NFSExport()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a NFS clientresource.')
-			luci_log.debug_verbose('addNfsx3: %s', str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this NFS export resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsx4: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addScr(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addScr0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addScr1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No script resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this script resource.')
-			luci_log.debug_verbose('addScr2: %s' % str(e))
-	else:
-		try:
-			res = Script()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a script resource.')
-			luci_log.debug_verbose('addScr3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this script resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addScr4: %s' % err)
-
-	try:
-		path = form['file'].strip()
-		if not path:
-			raise Exception, 'No path to a script file was given for this script resource.'
-		res.addAttribute('file', path)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addScr5: %s' % err)
-
-	if len(errors) > 0:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addSmb(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addSmb0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addSmb1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No Samba resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this Samba resource.')
-			luci_log.debug_verbose('addSmb2: %s' % str(e))
-	else:
-		try:
-			res = Samba()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a Samba resource.')
-			luci_log.debug_verbose('addSmb3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this Samba resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addSmb4: %s' % err)
-
-	try:
-		workgroup = form['workgroup'].strip()
-		if not workgroup:
-			raise Exception, 'No workgroup was given for this Samba resource.'
-		res.addAttribute('workgroup', workgroup)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addSmb5: %s' % err)
-
-	if len(errors) > 0:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addApache(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addApache0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addApache1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No Apache resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this Apache resource.')
-			luci_log.debug_verbose('addApache2: %s' % str(e))
-	else:
-		try:
-			res = Apache()
-			if not res:
-				raise Exception, 'could not create Apache object'
-		except Exception, e:
-			errors.append('An error occurred while creating an Apache resource.')
-			luci_log.debug_verbose('addApache3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this Apache resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache4: %s' % err)
-
-	try:
-		server_root = form['server_root'].strip()
-		if not server_root:
-			raise KeyError, 'No server root was given for this Apache resource.'
-		res.addAttribute('server_root', server_root)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache5: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the Apache configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache6: %s' % err)
-
-	try:
-		options = form['httpd_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('httpd_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('httpd_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache7: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache8: %s' % err)
-
-	if len(errors) > 0:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addMySQL(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addMySQL0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addMySQL1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No MySQL resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this MySQL resource.')
-			luci_log.debug_verbose('addMySQL2: %s' % str(e))
-	else:
-		try:
-			res = MySQL()
-			if not res:
-				raise Exception, 'could not create MySQL object'
-		except Exception, e:
-			errors.append('An error occurred while creating a MySQL resource.')
-			luci_log.debug_verbose('addMySQL3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this MySQL resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL4: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the MySQL configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL5: %s' % err)
-
-	try:
-		listen_addr = form['listen_address'].strip()
-		if not listen_addr:
-			raise KeyError, 'No address was given for MySQL server to listen on.'
-		res.addAttribute('listen_address', listen_addr)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL6: %s' % err)
-
-	try:
-		options = form['mysql_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('mysql_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('mysql_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL7: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL8: %s' % err)
-
-	if len(errors) > 0:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addOpenLDAP(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addOpenLDAP0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addOpenLDAP1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No OpenLDAP resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this OpenLDAP resource.')
-			luci_log.debug_verbose('addOpenLDAP2: %s' % str(e))
-	else:
-		try:
-			res = OpenLDAP()
-			if not res:
-				raise Exception, 'could not create OpenLDAP object'
-		except Exception, e:
-			errors.append('An error occurred while creating an OpenLDAP resource.')
-			luci_log.debug_verbose('addOpenLDAP3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this OpenLDAP resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP4: %s' % err)
-
-	try:
-		url_list = form['url_list'].strip()
-		if not url_list:
-			raise KeyError, 'No URL list was given for this OpenLDAP resource.'
-		res.addAttribute('url_list', url_list)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP5: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the OpenLDAP configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP6: %s' % err)
-
-	try:
-		options = form['slapd_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('slapd_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('slapd_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP7: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP8: %s' % err)
-
-	if len(errors) > 0:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addPostgres8(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addPostgreSQL80: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addPostgreSQL81: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No PostgreSQL 8 resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this PostgreSQL 8 resource.')
-			luci_log.debug_verbose('addPostgreSQL82: %s' % str(e))
-	else:
-		try:
-			res = Postgres8()
-			if not res:
-				raise Exception, 'could not create PostgreSQL 8 object'
-		except Exception, e:
-			errors.append('An error occurred while creating a PostgreSQL 8 resource.')
-			luci_log.debug_verbose('addPostgreSQL83: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this PostgreSQL 8 resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL84: %s' % err)
-
-	try:
-		user = form['postmaster_user'].strip()
-		if not user:
-			raise KeyError, 'No postmaster user was given for this PostgreSQL 8 resource.'
-		res.addAttribute('postmaster_user', user)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL85: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the PostgreSQL 8 configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL86: %s' % err)
-
-	try:
-		options = form['postmaster_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('postmaster_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('postmaster_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL87: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL88: %s' % err)
-
-	if len(errors) > 0:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addTomcat5(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addTomcat50: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addTomcat51: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No Tomcat 5 resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this Tomcat 5 resource.')
-			luci_log.debug_verbose('addTomcat52: %s' % str(e))
-	else:
-		try:
-			res = Tomcat5()
-			if not res:
-				raise Exception, 'could not create Tomcat5 object'
-		except Exception, e:
-			errors.append('An error occurred while creating a Tomcat 5 resource.')
-			luci_log.debug_verbose('addTomcat53: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this Tomcat 5 resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat54: %s' % err)
-
-	try:
-		user = form['tomcat_user'].strip()
-		if not user:
-			raise KeyError, 'No user was given for this Tomcat 5 resource.'
-		res.addAttribute('tomcat_user', user)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat55: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the Tomcat 5 configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat56: %s' % err)
-
-	try:
-		options = form['catalina_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('catalina_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('catalina_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat57: %s' % err)
-
-	try:
-		catalina_base = form['catalina_base'].strip()
-		if not catalina_base:
-			raise KeyError, 'No catalina base directory was given for this Tomcat 5 resource.'
-		res.addAttribute('catalina_base', catalina_base)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat58: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat59: %s' % err)
-
-	if len(errors) > 0:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addLVM(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addLVM0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addLVM1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No LVM resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this LVM resource.')
-			luci_log.debug_verbose('addLVM2: %s' % str(e))
-	else:
-		try:
-			res = LVM()
-			if not res:
-				raise Exception, 'could not create LVM object'
-		except Exception, e:
-			errors.append('An error occurred while creating an LVM resource.')
-			luci_log.debug_verbose('addLVM3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this LVM resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addLVM4: %s' % err)
-
-	try:
-		vg_name = form['vg_name'].strip()
-		if not vg_name:
-			raise KeyError, 'No volume group name was given.'
-		res.addAttribute('vg_name', vg_name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addLVM5: %s' % err)
-
-	try:
-		lv_name = form['lv_name'].strip()
-		if not lv_name:
-			raise KeyError, 'No logical volume name was given.'
-		res.addAttribute('lv_name', lv_name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addLVM6: %s' % err)
-
-	if len(errors) > 0:
-		return [None, None, errors]
-	return [res, model, None]
-
-resourceAddHandler = {
-	'ip': addIp,
-	'fs': addFs,
-	'gfs': addGfs,
-	'nfsm': addNfsm,
-	'nfsx': addNfsx,
-	'nfsc': addNfsc,
-	'scr': addScr,
-	'smb': addSmb,
-	'tomcat-5': addTomcat5,
-	'postgres-8': addPostgres8,
-	'apache': addApache,
-	'openldap': addOpenLDAP,
-	'lvm': addLVM,
-	'mysql': addMySQL
-}
-
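resourceAddHandler maps the resource-type token from the form to its
handler function, replacing a long if/elif chain. A sketch of how a caller
might consult it (hypothetical; the real call site is outside this hunk):

    handler = resourceAddHandler.get(res_type)
    if handler is None:
        # unknown type: fail softly rather than raise KeyError
        luci_log.debug_verbose('no add handler for resource type "%s"' % res_type)
    else:
        ret = handler(request)
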
-def resolveClusterChanges(self, clusterName, model):
-	try:
-		mb_nodes = model.getNodes()
-		if not mb_nodes or not len(mb_nodes):
-			raise Exception, 'node list is empty'
-	except Exception, e:
-		luci_log.debug_verbose('RCC0: no model builder nodes found for %s: %s' \
-				% (clusterName, str(e)))
-		return 'Unable to find cluster nodes for %s' % clusterName
-
-	try:
-		cluster_node = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName)
-		if not cluster_node:
-			raise Exception, 'cluster node is none'
-	except Exception, e:
-		luci_log.debug('RCC1: cannot find cluster node for %s: %s'
-			% (clusterName, str(e)))
-		return 'Unable to find an entry for %s in the Luci database.' % clusterName
-
-	try:
-		db_nodes = map(lambda x: x[0], cluster_node.objectItems('Folder'))
-		if not db_nodes or not len(db_nodes):
-			raise Exception, 'no database nodes'
-	except Exception, e:
-		# Should we just create them all? Can this even happen?
-		luci_log.debug('RCC2: error: %s' % str(e))
-		return 'Unable to find database entries for any nodes in %s' % clusterName
-
-	same_host = lambda x, y: x == y or x[:len(y) + 1] == y + '.' or y[:len(x) + 1] == x + '.'
-
-	# this is a really great algorithm.
-	missing_list = list()
-	new_list = list()
-	for i in mb_nodes:
-		for j in db_nodes:
-			f = 0
-			if same_host(i, j):
-				f = 1
-				break
-		if not f:
-			new_list.append(i)
-
-	for i in db_nodes:
-		for j in mb_nodes:
-			f = 0
-			if same_host(i, j):
-				f = 1
-				break
-		if not f:
-			missing_list.append(i)
-
-	messages = list()
-	for i in missing_list:
-		try:
-			## or alternately
-			##new_node = cluster_node.restrictedTraverse(i)
-			##setNodeFlag(self, new_node, CLUSTER_NODE_NOT_MEMBER)
-			cluster_node.delObjects([i])
-			messages.append('Node \"%s\" is no longer a member of cluster \"%s.\" It has been deleted from the management interface for this cluster.' % (i, clusterName))
-			luci_log.debug_verbose('RCC3: deleted node %s' % i)
-		except Exception, e:
-			luci_log.debug_verbose('RCC4: delObjects: %s: %s' % (i, str(e)))
-
-	new_flags = CLUSTER_NODE_NEED_AUTH | CLUSTER_NODE_ADDED
-	for i in new_list:
-		try:
-			cluster_node.manage_addFolder(i, '__luci__:csystem:' + clusterName)
-			new_node = cluster_node.restrictedTraverse(i)
-			setNodeFlag(self, new_node, new_flags)
-			messages.append('A new cluster node, \"%s,\" is now a member of cluster \"%s.\" It has been added to the management interface for this cluster, but you must authenticate to it in order for it to be fully functional.' % (i, clusterName))
-		except Exception, e:
-			messages.append('A new cluster node, \"%s,\" is now a member of cluster \"%s,\" but it has not been added to the management interface for this cluster as a result of an error creating a database entry for it.' % (i, clusterName))
-			luci_log.debug_verbose('RCC5: addFolder: %s/%s: %s' \
-				% (clusterName, i, str(e)))
-
-	return messages
-
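The two nested loops above compute the symmetric difference between the
model's node list and the Luci database entries; same_host's prefix tests
paper over FQDN versus short-name mismatches. A set-based sketch, assuming
the first DNS label is a usable key (which is what the prefix test
effectively relies on):

    short = lambda h: h.split('.')[0]
    mb = dict([ (short(i), i) for i in mb_nodes ])
    db = dict([ (short(i), i) for i in db_nodes ])
    new_list     = [ mb[k] for k in mb.keys() if not db.has_key(k) ]
    missing_list = [ db[k] for k in db.keys() if not mb.has_key(k) ]
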
-def addResource(self, request, model, res, res_type):
-	clustername = model.getClusterName()
-	if not clustername:
-		luci_log.debug_verbose('addResource0: no cluname from mb')
-		return 'Unable to determine cluster name'
-
-	rc = getRicciAgent(self, clustername)
-	if not rc:
-		luci_log.debug_verbose('addResource1: unable to find a ricci agent for cluster %s' % clustername)
-		return 'Unable to find a ricci agent for the %s cluster' % clustername
-
-	try:
-		model.getResourcesPtr().addChild(res)
-	except Exception, e:
-		luci_log.debug_verbose('addResource2: adding the new resource failed: %s' % str(e))
-		return 'Unable to add the new resource'
-
-	try:
-		model.setModified(True)
-		conf = model.exportModelAsString()
-		if not conf:
-			raise Exception, 'model string for %s is blank' % clustername
-	except Exception, e:
-		luci_log.debug_verbose('addResource3: exportModelAsString : %s' \
-			% str(e))
-		return 'An error occurred while adding this resource'
-
-	try:
-		ragent = rc.hostname()
-		if not ragent:
-			luci_log.debug_verbose('addResource4: missing ricci hostname')
-			raise Exception, 'unknown ricci agent hostname'
-
-		batch_number, result = setClusterConf(rc, str(conf))
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('addResource5: missing batch_number or result')
-			raise Exception, 'unable to save the new cluster configuration.'
-	except Exception, e:
-		luci_log.debug_verbose('addResource6: %s' % str(e))
-		return 'An error occurred while propagating the new cluster.conf: %s' % str(e)
-
-	if res_type != 'ip':
-		res_name = res.attr_hash['name']
-	else:
-		res_name = res.attr_hash['address']
-
-	try:
-		try:
-			if request.form.has_key('edit'):
-				action_type = RESOURCE_CONFIG
-				action_str = 'Configuring resource \"%s\"' % res_name
-			else:
-				raise Exception, 'new'
-		except Exception, e:
-			action_type = RESOURCE_ADD
-			action_str = 'Creating new resource \"%s\"' % res_name
-
-		set_node_flag(self, clustername, ragent, str(batch_number), action_type, action_str)
-	except Exception, e:
-		luci_log.debug_verbose('addResource7: failed to set flags: %s' % str(e))
-
-	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + RESOURCES + "&clustername=" + clustername + '&busyfirst=true')
-
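addResource is the template every configuration writer here follows:
mutate the in-memory model, serialize it, push it through a ricci agent,
then record a flag so the UI can poll the in-flight batch. In outline,
using only calls that appear above:

    model.getResourcesPtr().addChild(res)                   # 1. mutate the model
    conf = model.exportModelAsString()                      # 2. serialize it
    batch_number, result = setClusterConf(rc, str(conf))    # 3. propagate via ricci
    set_node_flag(self, clustername, rc.hostname(),         # 4. record the batch
        str(batch_number), RESOURCE_ADD, action_str)
    # the '&busyfirst=true' redirect makes the next page load poll this flag
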
-def getResource(model, name):
-	resPtr = model.getResourcesPtr()
-	resources = resPtr.getChildren()
-
-	for res in resources:
-		if res.getName() == name:
-			return res
-
-	luci_log.debug_verbose('getResource: unable to find resource \"%s\"' % name)
-	raise KeyError, name
-
-def getResourceForEdit(model, name):
-	resPtr = model.getResourcesPtr()
-	resources = resPtr.getChildren()
-
-	for res in resources:
-		if res.getName() == name:
-			resPtr.removeChild(res)
-			return res
-
-	luci_log.debug_verbose('GRFE0: unable to find resource \"%s\"' % name)
-	raise KeyError, name
+	luci_log.debug_verbose('getResource: unable to find resource "%s"' % name)
+	raise KeyError, name
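
getResourceForEdit differs from getResource in one side effect: it
detaches the match from the resources node, so the edited object can be
re-added without creating a duplicate. The edit flow the handlers above
imply:

    res = getResourceForEdit(model, oldname)  # detached from the tree here
    res.addAttribute('name', newname)         # mutated in place
    model.getResourcesPtr().addChild(res)     # re-attached by addResource()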
 
 def appendModel(request, model):
 	try:
@@ -7443,76 +6104,9 @@
 		luci_log.debug_verbose('Appending model to request failed')
 		return 'An error occurred while storing the cluster model.'
 
-def resolve_nodename(self, clustername, nodename):
-	path = str(CLUSTER_FOLDER_PATH + clustername)
-
-	try:
-		clusterfolder = self.restrictedTraverse(path)
-		objs = clusterfolder.objectItems('Folder')
-	except Exception, e:
-		luci_log.debug_verbose('RNN0: error for %s/%s: %s' \
-			% (nodename, clustername, str(e)))
-		return nodename
-
-	for obj in objs:
-		try:
-			if obj[0].find(nodename) != (-1):
-				return obj[0]
-		except:
-			continue
-
-	luci_log.debug_verbose('RNN1: failed for %s/%s: nothing found' \
-		% (nodename, clustername))
-	return nodename
-
-def noNodeFlagsPresent(self, nodefolder, flagname, hostname):
-	try:
-		items = nodefolder.objectItems('ManagedSystem')
-	except:
-		luci_log.debug('NNFP0: error getting flags for %s' % nodefolder[0])
-		return None
-
-	for item in items:
-		if item[0] != flagname:
-			continue
-
-		#a flag already exists... try to delete it
-		try:
-			# hostname must be a FQDN
-			rc = RicciCommunicator(hostname)
-		except Exception, e:
-			luci_log.info('NNFP1: ricci error %s: %s' % (hostname, str(e)))
-			return None
-
-		if not rc.authed():
-			try:
-				snode = getStorageNode(self, hostname)
-				setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
-			except:
-				pass
-			luci_log.info('NNFP2: %s not authenticated' % item[0])
-
-		batch_ret = checkBatch(rc, item[1].getProperty(BATCH_ID))
-		finished = batch_ret[0]
-		if finished == True or finished == -1:
-			if finished == -1:
-				luci_log.debug_verbose('NNFP3: batch error: %s' % batch_ret[1])
-			try:
-				nodefolder.manage_delObjects([item[0]])
-			except Exception, e:
-				luci_log.info('NNFP4: manage_delObjects for %s failed: %s' \
-					% (item[0], str(e)))
-				return None
-			return True
-		else:
-			#Not finished, so cannot remove flag
-			return False
-
-	return True
-
 def getModelBuilder(self, rc, isVirtualized):
 	try:
-		cluster_conf_node = getClusterConf(rc)
+		cluster_conf_node = rq.getClusterConf(rc)
 		if not cluster_conf_node:
 			raise Exception, 'getClusterConf returned None'
 	except Exception, e:
@@ -7525,7 +6119,7 @@
 			raise Exception, 'ModelBuilder returned None'
 	except Exception, e:
 		try:
-			luci_log.debug_verbose('GMB1: An error occurred while trying to get model for conf \"%s\": %s' % (cluster_conf_node.toxml(), str(e)))
+			luci_log.debug_verbose('GMB1: An error occurred while trying to get model for conf "%s": %s' % (cluster_conf_node.toxml(), str(e)))
 		except:
 			luci_log.debug_verbose('GMB1: ModelBuilder failed')
 
@@ -7551,87 +6145,57 @@
 
 	return model
 
-def set_node_flag(self, cluname, agent, batchid, task, desc):
-	path = str(CLUSTER_FOLDER_PATH + cluname)
-	batch_id = str(batchid)
-	objname = str(agent + '____flag')
-
-	objpath = ''
-	try:
-		clusterfolder = self.restrictedTraverse(path)
-		clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-		objpath = str(path + '/' + objname)
-		flag = self.restrictedTraverse(objpath)
-		flag.manage_addProperty(BATCH_ID, batch_id, 'string')
-		flag.manage_addProperty(TASKTYPE, task, 'string')
-		flag.manage_addProperty(FLAG_DESC, desc, 'string')
-	except Exception, e:
-		errmsg = 'SNF0: error creating flag (%s,%s,%s) at %s: %s' \
-					% (batch_id, task, desc, objpath, str(e))
-		luci_log.debug_verbose(errmsg)
-		raise Exception, errmsg
-
-
-
-
-
-
-
-
-
-
-
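set_node_flag stores a small ManagedSystem record (batch id, task type,
and a human-readable description) under the name '<agent>____flag', so the
status pages can poll an in-flight ricci batch. A typical call site,
mirroring addResource above:

    set_node_flag(self, clustername, rc.hostname(), str(batch_number),
        RESOURCE_ADD, 'Creating new resource "%s"' % res_name)
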
 def process_cluster_conf_editor(self, req):
 	clustername = req['clustername']
-	msg = '\n'
+	msg_list = ['\n']
 	cc = ''
 	if 'new_cluster_conf' in req:
 		cc = req['new_cluster_conf']
-		msg += 'Checking if valid XML - '
+		msg_list.append('Checking if valid XML - ')
 		cc_xml = None
 		try:
 			cc_xml = minidom.parseString(cc)
 		except:
 			pass
 		if cc_xml == None:
-			msg += 'FAILED\n'
-			msg += 'Fix the error and try again:\n'
+			msg_list.append('FAILED\n')
+			msg_list.append('Fix the error and try again:\n')
 		else:
-			msg += 'PASSED\n'
+			msg_list.append('PASSED\n')
 
-			msg += 'Making sure no clustername change has accured - '
+			msg_list.append('Making sure no cluster name change has occurred - ')
 			new_name = cc_xml.firstChild.getAttribute('name')
 			if new_name != clustername:
-				msg += 'FAILED\n'
-				msg += 'Fix the error and try again:\n'
+				msg_list.append('FAILED\n')
+				msg_list.append('Fix the error and try again:\n')
 			else:
-				msg += 'PASSED\n'
+				msg_list.append('PASSED\n')
 
-				msg += 'Increasing cluster version number - '
+				msg_list.append('Incrementing the cluster version number - ')
 				version = cc_xml.firstChild.getAttribute('config_version')
 				version = int(version) + 1
 				cc_xml.firstChild.setAttribute('config_version', str(version))
-				msg += 'DONE\n'
+				msg_list.append('DONE\n')
 
-				msg += 'Propagating new cluster.conf'
+				msg_list.append('Propagating the new cluster.conf')
 				rc = getRicciAgent(self, clustername)
 				if not rc:
 					luci_log.debug_verbose('VFA: unable to find a ricci agent for the %s cluster' % clustername)
-					msg += '\nUnable to contact a ricci agent for cluster ' + clustername + '\n\n'
+					msg_list.append('\nUnable to contact a ricci agent for cluster "%s"\n\n' % clustername)
 				else:
-					batch_id, result = setClusterConf(rc, cc_xml.toxml())
+					batch_id, result = rq.setClusterConf(rc, cc_xml.toxml())
 					if batch_id is None or result is None:
 						luci_log.debug_verbose('VFA: setClusterConf: batchid or result is None')
-						msg += '\nUnable to propagate the new cluster configuration for ' + clustername + '\n\n'
+						msg_list.append('\nUnable to propagate the new cluster configuration for cluster "%s"\n\n' % clustername)
 					else:
-						msg += ' - DONE\n'
+						msg_list.append(' - DONE\n')
 						cc = cc_xml.toxml()
-						msg += '\n\nALL DONE\n\n'
+						msg_list.append('\n\nALL DONE\n\n')
 	else:
 		if getClusterInfo(self, None, req) == {}:
-			msg = 'invalid cluster'
+			msg_list.append('invalid cluster')
 		else:
 			model = req.SESSION.get('model')
 			cc = model.exportModelAsString()
-	return {'msg'              : msg,
-		'cluster_conf'     : cc}
+
+	return {'msg': ''.join(msg_list), 'cluster_conf': cc}
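
The rewrite above replaces repeated 'msg += ...' concatenation with a list
of fragments joined once at the end, the usual CPython idiom since each
'+=' on a string can copy the whole accumulated buffer:

    parts = ['\n']
    parts.append('Checking if valid XML - ')
    parts.append('PASSED\n')
    msg = ''.join(parts)    # one allocation instead of one per '+='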
--- conga/luci/site/luci/Extensions/conga_constants.py	2007/03/15 16:50:33	1.39
+++ conga/luci/site/luci/Extensions/conga_constants.py	2007/05/03 20:16:38	1.39.2.1
@@ -1,158 +1,146 @@
-#PAGE_TYPEs
-CLUSTERLIST="3"
-CLUSTERS="4"
-CLUSTER="5"
-CLUSTER_ADD="6"
-CLUSTER_CONFIG="7"
-CLUSTER_PROCESS="8"
-NODE="9"
-NODES="10"
-NODE_LIST="11"
-NODE_GRID="12"
-NODE_CONFIG="14"
-NODE_ADD="15"
-NODE_PROCESS="16"
-NODE_LOGS="17"
-VM_ADD="18"
-VM_CONFIG="19"
-SERVICES="20"
-SERVICE_ADD="21"
-SERVICE_LIST="22"
-SERVICE_CONFIG="23"
-SERVICE="24"
-SERVICE_PROCESS="25"
-SERVICE_START="26"
-SERVICE_STOP="27"
-SERVICE_RESTART="28"
-VM_PROCESS="29"
-RESOURCES="30"
-RESOURCE_ADD="31"
-RESOURCE_LIST="32"
-RESOURCE_CONFIG="33"
-RESOURCE="34"
-RESOURCE_PROCESS="35"
-RESOURCE_REMOVE="36"
-FDOMS="40"
-FDOM_ADD="41"
-FDOM_LIST="42"
-FDOM_CONFIG="43"
-FDOM="44"
-FENCEDEVS="50"
-FENCEDEV_ADD="51"
-FENCEDEV_LIST="52"
-FENCEDEV_CONFIG="53"
-FENCEDEV="54"
-CLUSTER_DAEMON="55"
-SERVICE_DELETE = '56'
-FENCEDEV_DELETE = '57'
-FENCEDEV_NODE_CONFIG = '58'
-SERVICE_MIGRATE = '59'
-
-CONF_EDITOR = '80'
-SYS_SERVICE_MANAGE = '90'
-SYS_SERVICE_UPDATE = '91'
-
-#Cluster tasks
-CLUSTER_STOP = '1000'
-CLUSTER_START = '1001'
-CLUSTER_RESTART = '1002'
-CLUSTER_DELETE = '1003'
-
-#General tasks
-NODE_LEAVE_CLUSTER="100"
-NODE_JOIN_CLUSTER="101"
-NODE_REBOOT="102"
-NODE_FENCE="103"
-NODE_DELETE="104"
-
-BASECLUSTER="201"
-FENCEDAEMON="202"
-MULTICAST="203"
-QUORUMD="204"
+# Cluster area page types
+CLUSTERLIST				= '3'
+CLUSTERS				= '4'
+CLUSTER					= '5'
+CLUSTER_ADD				= '6'
+CLUSTER_CONFIG			= '7'
+CLUSTER_PROCESS			= '8'
+NODE					= '9'
+NODES					= '10'
+NODE_LIST				= '11'
+NODE_GRID				= '12'
+NODE_CONFIG				= '14'
+NODE_ADD				= '15'
+NODE_PROCESS			= '16'
+NODE_LOGS				= '17'
+VM_ADD					= '18'
+VM_CONFIG				= '19'
+SERVICES				= '20'
+SERVICE_ADD				= '21'
+SERVICE_LIST			= '22'
+SERVICE_CONFIG			= '23'
+SERVICE					= '24'
+SERVICE_PROCESS			= '25'
+SERVICE_START			= '26'
+SERVICE_STOP			= '27'
+SERVICE_RESTART			= '28'
+VM_PROCESS				= '29'
+RESOURCES				= '30'
+RESOURCE_ADD			= '31'
+RESOURCE_LIST			= '32'
+RESOURCE_CONFIG			= '33'
+RESOURCE				= '34'
+RESOURCE_PROCESS		= '35'
+RESOURCE_REMOVE			= '36'
+FDOMS					= '40'
+FDOM_ADD				= '41'
+FDOM_LIST				= '42'
+FDOM_CONFIG				= '43'
+FDOM					= '44'
+FENCEDEVS				= '50'
+FENCEDEV_ADD			= '51'
+FENCEDEV_LIST			= '52'
+FENCEDEV_CONFIG			= '53'
+FENCEDEV				= '54'
+CLUSTER_DAEMON			= '55'
+SERVICE_DELETE			= '56'
+FENCEDEV_DELETE			= '57'
+FENCEDEV_NODE_CONFIG	= '58'
+SERVICE_MIGRATE			= '59'
+CONF_EDITOR				= '80'
+SYS_SERVICE_MANAGE		= '90'
+SYS_SERVICE_UPDATE		= '91'
+
+# Cluster tasks
+CLUSTER_STOP	= '1000'
+CLUSTER_START	= '1001'
+CLUSTER_RESTART	= '1002'
+CLUSTER_DELETE	= '1003'
+
+# Node tasks
+NODE_LEAVE_CLUSTER	= '100'
+NODE_JOIN_CLUSTER	= '101'
+NODE_REBOOT			= '102'
+NODE_FENCE			= '103'
+NODE_DELETE			= '104'
+
+# General tasks
+BASECLUSTER	= '201'
+FENCEDAEMON	= '202'
+MULTICAST	= '203'
+QUORUMD		= '204'
 
 PROPERTIES_TAB = 'tab'
 
-PROP_GENERAL_TAB = '1'
-PROP_FENCE_TAB = '2'
-PROP_MCAST_TAB = '3'
-PROP_QDISK_TAB = '4'
-PROP_GULM_TAB = '5'
-
-PAGETYPE="pagetype"
-ACTIONTYPE="actiontype"
-TASKTYPE="tasktype"
-CLUNAME="clustername"
-BATCH_ID="batch_id"
-FLAG_DESC="flag_desc"
-LAST_STATUS="last_status"
+PROP_GENERAL_TAB	= '1'
+PROP_FENCE_TAB		= '2'
+PROP_MCAST_TAB		= '3'
+PROP_QDISK_TAB		= '4'
+PROP_GULM_TAB		= '5'
+
+PAGETYPE	= 'pagetype'
+ACTIONTYPE	= 'actiontype'
+TASKTYPE	= 'tasktype'
+CLUNAME		= 'clustername'
+BATCH_ID	= 'batch_id'
+FLAG_DESC	= 'flag_desc'
+LAST_STATUS	= 'last_status'
 
-PATH_TO_PRIVKEY="/var/lib/luci/var/certs/privkey.pem"
-PATH_TO_CACERT="/var/lib/luci/var/certs/cacert.pem"
+PATH_TO_PRIVKEY	= '/var/lib/luci/var/certs/privkey.pem'
+PATH_TO_CACERT	= '/var/lib/luci/var/certs/cacert.pem'
 
 # Zope DB paths
+PLONE_ROOT = 'luci'
 CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
 STORAGE_FOLDER_PATH = '/luci/systems/storage/'
 
-#Node states
-NODE_ACTIVE="0"
-NODE_INACTIVE="1"
-NODE_UNKNOWN="2"
-NODE_ACTIVE_STR="Cluster Member"
-NODE_INACTIVE_STR="Not a Cluster Member"
-NODE_UNKNOWN_STR="Unknown State"
-
-FD_VAL_FAIL = 1
-FD_VAL_SUCCESS = 0
-
-#cluster/node create batch task index
-INSTALL_TASK = 1
-DISABLE_SVC_TASK = 2
-REBOOT_TASK = 3
-SEND_CONF = 4
-ENABLE_SVC_TASK = 5
-START_NODE = 6
-RICCI_CONNECT_FAILURE = (-1000)
+# Node states
+NODE_ACTIVE		= '0'
+NODE_INACTIVE	= '1'
+NODE_UNKNOWN	= '2'
+
+NODE_ACTIVE_STR		= 'Cluster Member'
+NODE_INACTIVE_STR	= 'Not a Cluster Member'
+NODE_UNKNOWN_STR	= 'Unknown State'
+
+# cluster/node create batch task index
+INSTALL_TASK			= 1
+DISABLE_SVC_TASK		= 2
+REBOOT_TASK				= 3
+SEND_CONF				= 4
+ENABLE_SVC_TASK			= 5
+START_NODE				= 6
+RICCI_CONNECT_FAILURE	= (-1000)
+
+RICCI_CONNECT_FAILURE_MSG = 'A problem was encountered connecting with this node.  '
 
-RICCI_CONNECT_FAILURE_MSG = "A problem was encountered connecting with this node.  "
-#cluster/node create error messages
+# cluster/node create error messages
 CLUNODE_CREATE_ERRORS = [
-	"An unknown error occurred when creating this node: ",
-	"A problem occurred when installing packages: ",
-	"A problem occurred when disabling cluster services on this node: ",
-	"A problem occurred when rebooting this node: ",
-	"A problem occurred when propagating the configuration to this node: ",
-	"A problem occurred when enabling cluster services on this node: ",
-	"A problem occurred when starting this node: "
+	'An unknown error occurred when creating this node: %s',
+	'A problem occurred when installing packages: %s',
+	'A problem occurred when disabling cluster services on this node: %s',
+	'A problem occurred when rebooting this node: %s',
+	'A problem occurred when propagating the configuration to this node: %s',
+	'A problem occurred when enabling cluster services on this node: %s',
+	'A problem occurred when starting this node: %s'
 ]
 
-#cluster/node create error status messages
-PRE_INSTALL = "The install state is not yet complete"
-PRE_REBOOT = "Installation complete, but reboot not yet complete"
-PRE_CFG = "Reboot stage successful, but configuration for the cluster is not yet distributed"
-PRE_JOIN = "Packages are installed and configuration has been distributed, but the node has not yet joined the cluster."
-
-
-POSSIBLE_REBOOT_MESSAGE = "This node is not currently responding and is probably rebooting as planned. This state should persist for 5 minutes or so..."
-
-REDIRECT_MSG = " You will be redirected in 5 seconds."
-
-
-# Homebase-specific constants
-HOMEBASE_ADD_USER = "1"
-HOMEBASE_ADD_SYSTEM = "2"
-HOMEBASE_PERMS = "3"
-HOMEBASE_DEL_USER = "4"
-HOMEBASE_DEL_SYSTEM = "5"
-HOMEBASE_ADD_CLUSTER = "6"
-HOMEBASE_ADD_CLUSTER_INITIAL = "7"
-HOMEBASE_AUTH = "8"
+# cluster/node create error status messages
+PRE_INSTALL = 'The install state is not yet complete.'
+PRE_REBOOT	= 'Installation complete, but reboot not yet complete.'
+PRE_CFG		= 'Reboot stage successful, but configuration for the cluster is not yet distributed.'
+PRE_JOIN	= 'Packages are installed and configuration has been distributed, but the node has not yet joined the cluster.'
 
-# Cluster node exception attribute flags
-CLUSTER_NODE_NEED_AUTH = 0x01
-CLUSTER_NODE_NOT_MEMBER = 0x02
-CLUSTER_NODE_ADDED = 0x04
+POSSIBLE_REBOOT_MESSAGE = 'This node is not currently responding and is probably rebooting as planned. This state should persist for 5 minutes or so...'
 
-PLONE_ROOT = 'luci'
+REDIRECT_MSG = ' -- You will be redirected in 5 seconds.'
 
-LUCI_DEBUG_MODE = 0
-LUCI_DEBUG_VERBOSITY = 0
+# Cluster node exception attribute flags
+CLUSTER_NODE_NEED_AUTH	= 0x01
+CLUSTER_NODE_NOT_MEMBER	= 0x02
+CLUSTER_NODE_ADDED		= 0x04
+
+# Debugging parameters. Set LUCI_DEBUG_MODE to 1 and LUCI_DEBUG_VERBOSITY
+# to >= 2 to get full debugging output in syslog (LOG_DAEMON/LOG_DEBUG).
+LUCI_DEBUG_MODE			= 0
+LUCI_DEBUG_VERBOSITY	= 0
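
The closing comment documents the two debug knobs. A hedged sketch of the
gating LuciSyslog presumably applies (its implementation is not part of
this hunk):

    import syslog
    from conga_constants import LUCI_DEBUG_MODE, LUCI_DEBUG_VERBOSITY

    def debug_verbose(msg):
        # emit only when both knobs are raised, per the comment above
        if LUCI_DEBUG_MODE and LUCI_DEBUG_VERBOSITY >= 2:
            syslog.syslog(syslog.LOG_DAEMON | syslog.LOG_DEBUG, msg)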
--- conga/luci/site/luci/Extensions/conga_storage_constants.py	2006/10/15 22:34:54	1.8
+++ conga/luci/site/luci/Extensions/conga_storage_constants.py	2007/05/03 20:16:38	1.8.8.1
@@ -1,65 +1,24 @@
-
-from ricci_defines import *
-
+from ricci_defines import MAPPER_ATARAID_TYPE, MAPPER_CRYPTO_TYPE, MAPPER_iSCSI_TYPE, MAPPER_MDRAID_TYPE, MAPPER_MULTIPATH_TYPE, MAPPER_PT_TYPE, MAPPER_SYS_TYPE, MAPPER_VG_TYPE
 
 ## request vars ##
 
-PAGETYPE="pagetype"
-CLUNAME="clustername"
-STONAME='storagename'
-
-
-
-## pagetypes ##
-
-# CLUSTER PAGE_TYPEs #
-CLUSTERS="4"
-CLUSTER="5"
-CLUSTER_ADD="6"
-CLUSTER_CONFIG="7"
-NODE="9"
-NODES="10"
-NODE_LIST="11"
-NODE_GRID="12"
-NODE_CONFIG="14"
-NODE_ADD="15"
-NODE_PROCESS="16"
-SERVICES="20"
-SERVICE_ADD="21"
-SERVICE_LIST="22"
-SERVICE_CONFIG="23"
-SERVICE="24"
-SERVICE_PROCESS="25"
-RESOURCES="30"
-RESOURCE_ADD="31"
-RESOURCE_LIST="32"
-RESOURCE_CONFIG="33"
-RESOURCE="34"
-RESOURCE_PROCESS="35"
-FDOMS="40"
-FDOM_ADD="41"
-FDOM_LIST="42"
-FDOM_CONFIG="43"
-FDOM="44"
-FENCEDEVS="50"
-FENCEDEV_ADD="51"
-FENCEDEV_LIST="52"
-FENCEDEV_CONFIG="53"
-FENCEDEV="54"
+PAGETYPE = "pagetype"
+CLUNAME = "clustername"
+STONAME = 'storagename'
 
 
 # storage pagetypes #
 
-PT_MAPPER_ID='mapper_id'
-PT_MAPPER_TYPE='mapper_type'
-PT_PATH='bd_path'
-
-STORAGESYS="0"
-STORAGE_CONFIG="43"
-STORAGE="44"
-CLUSTER_STORAGE="45"
+PT_MAPPER_ID = 'mapper_id'
+PT_MAPPER_TYPE = 'mapper_type'
+PT_PATH = 'bd_path'
+
+STORAGESYS = "0"
+STORAGE_CONFIG = "43"
+STORAGE = "44"
+CLUSTER_STORAGE = "45"
 
-STORAGE_COMMIT_CHANGES='commit_changes'
+STORAGE_COMMIT_CHANGES = 'commit_changes'
 
 
 VIEW_MAPPERS = '51'
@@ -84,6 +43,7 @@
                       MAPPER_MULTIPATH_TYPE   : ('Multipath',       'Multipath',      'Path'),
                       MAPPER_CRYPTO_TYPE      : ('Encryption',      'Volume',         'Device'),
                       MAPPER_iSCSI_TYPE       : ('iSCSI',           'Volume',         'BUG: source not defined')}
+
 def get_pretty_mapper_info(mapper_type):
     try:
         return PRETTY_MAPPER_INFO[mapper_type]
@@ -148,6 +108,7 @@
                      'uuid'                    : "UUID",
                      'vendor'                  : "Vendor",
                      'vgname'                  : "Volume Group Name"}
+
 def get_pretty_prop_name(name):
     try:
         return PRETTY_PROP_NAMES[name]
@@ -181,6 +142,7 @@
                    'ocfs2'    : "Oracle Clustered FS v.2",
                    'relayfs'  : "Relay FS",
                    'udf'      : "Universal Disk Format"}
+
 def get_pretty_fs_name(name):
     try:
         return PRETTY_FS_NAMES[name]
@@ -200,6 +162,7 @@
                 MAPPER_MULTIPATH_TYPE   : ('icon_mapper_multipath.png', 'icon_bd_multipath.png', ''),
                 MAPPER_CRYPTO_TYPE      : ('icon_mapper_crypto.png',    'icon_bd_crypto.png',    ''),
                 MAPPER_iSCSI_TYPE       : ('',                          'icon_bd_net.png',       '')}
+
 def get_mapper_icons(mapper_type):
     try:
         return MAPPER_ICONS[mapper_type]
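
Each get_pretty_* accessor wraps a module-level table in try/except so
unknown keys degrade gracefully instead of raising. The except branch
falls outside this hunk; by analogy with the other accessors it presumably
returns the raw identifier:

    def get_pretty_fs_name(name):
        try:
            return PRETTY_FS_NAMES[name]
        except KeyError:
            return name    # assumption: fall back to the raw identifier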
--- conga/luci/site/luci/Extensions/homebase_adapters.py	2007/02/12 20:24:28	1.50
+++ conga/luci/site/luci/Extensions/homebase_adapters.py	2007/05/03 20:16:38	1.50.2.1
@@ -1,34 +1,25 @@
-import re
-import os
-from AccessControl import getSecurityManager
-import cgi
-
 from conga_constants import PLONE_ROOT, CLUSTER_NODE_NEED_AUTH, \
-							HOMEBASE_ADD_CLUSTER, HOMEBASE_ADD_CLUSTER_INITIAL, \
-							HOMEBASE_ADD_SYSTEM, HOMEBASE_ADD_USER, \
-							HOMEBASE_DEL_SYSTEM, HOMEBASE_DEL_USER, HOMEBASE_PERMS, \
-							STORAGE_FOLDER_PATH, CLUSTER_FOLDER_PATH
-
-from ricci_bridge import getClusterConf
-from ricci_communicator import RicciCommunicator, CERTS_DIR_PATH
-from clusterOS import resolveOSType
+	STORAGE_FOLDER_PATH, CLUSTER_FOLDER_PATH
+
+from RicciQueries import getClusterConf
 from LuciSyslog import LuciSyslog
+from HelperFunctions import resolveOSType
+
+# Homebase area page types
+HOMEBASE_ADD_USER				= '1'
+HOMEBASE_ADD_SYSTEM				= '2'
+HOMEBASE_PERMS					= '3'
+HOMEBASE_DEL_USER				= '4'
+HOMEBASE_DEL_SYSTEM				= '5'
+HOMEBASE_ADD_CLUSTER			= '6'
+HOMEBASE_ADD_CLUSTER_INITIAL	= '7'
+HOMEBASE_AUTH					= '8'
 
 try:
 	luci_log = LuciSyslog()
 except:
 	pass
 
-def siteIsSetup(self):
-	try:
-		if os.path.isfile(CERTS_DIR_PATH + 'privkey.pem') and os.path.isfile(CERTS_DIR_PATH + 'cacert.pem'):
-			return True
-	except: pass
-	return False
-
-def strFilter(regex, replaceChar, arg):
-	return re.sub(regex, replaceChar, arg)
-
 def validateDelSystem(self, request):
 	errors = list()
 	messages = list()
@@ -42,7 +33,7 @@
 			if dsResult:
 				errors.append(dsResult)
 			else:
-				messages.append('Removed storage system \"%s\" successfully' % i)
+				messages.append('Removed storage system "%s" successfully' % i)
 
 	if '__CLUSTER' in request.form:
 		cluNames = request.form['__CLUSTER']
@@ -53,7 +44,7 @@
 			if dcResult:
 				errors.append(dcResult)
 			else:
-				messages.append('Removed cluster \"%s\" successfully' % i)
+				messages.append('Removed cluster "%s" successfully' % i)
 
 	if len(errors) > 0:
 		retCode = False
@@ -76,27 +67,27 @@
 		if not user:
 			raise Exception, 'user %s does not exist' % userId
 	except:
-		return (False, {'errors': [ 'No such user: \"' + userId + '\"' ] })
+		return (False, {'errors': [ 'No such user: "%s"' % userId ] })
 
 	for i in getClusters(self):
 		try:
 			i[1].manage_delLocalRoles([userId])
 		except:
-			errors.append('Error deleting roles from cluster \"' + i[0] + '\" for user \"' + userId + '\"')
+			errors.append('Error deleting roles from cluster "%s" for user "%s"' % (i[0], userId))
 
 	for i in getStorage(self):
 		try:
 			i[1].manage_delLocalRoles([userId])
 		except:
-			errors.append('Error deleting roles from storage system \"' + i[0] + '\" for user \"' + userId + '\"')
+			errors.append('Error deleting roles from storage system "%s" for user "%s"' % (i[0], userId))
 
 	try:
 		self.acl_users.userFolderDelUsers([userId])
 	except:
-		errors.append('Unable to delete user \"' + userId + '\"')
+		errors.append('Unable to delete user "%s"' % userId)
 		return (False, {'errors': errors })
 
-	messages.append('User \"' + userId + '\" has been deleted')
+	messages.append('User "%s" has been deleted' % userId)
 	return (True, {'errors': errors, 'messages': messages })
 
 def validateAddUser(self, request):
@@ -112,7 +103,7 @@
 	user = request.form['newUserName']
 
 	if self.portal_membership.getMemberById(user):
-		return (False, {'errors': ['The user \"' + user + '\" already exists']})
+		return (False, {'errors': ['The user "%s" already exists' % user ]})
 
 	passwd = request.form['newPassword']
 	pwconfirm = request.form['newPasswordConfirm']
@@ -121,14 +112,14 @@
 		return (False, {'errors': ['The passwords do not match']})
 
 	try:
-		self.portal_registration.addMember(user, passwd, properties = { 'username': user, 'password': passwd, 'confirm': passwd, 'roles': ['Member'], 'domains':[], 'email': user + '@example.com' })
+		self.portal_registration.addMember(user, passwd, properties = { 'username': user, 'password': passwd, 'confirm': passwd, 'roles': ['Member'], 'domains':[], 'email': '%s@example.com' % user })
 	except:
-		return (False, {'errors': [ 'Unable to add new user \"' + user + '\"' ] })
+		return (False, {'errors': [ 'Unable to add new user "%s"' % user ] })
 
 	if not self.portal_membership.getMemberById(user):
-		return (False, {'errors': [ 'Unable to add new user \"' + user + '\"'] })
+		return (False, {'errors': [ 'Unable to add new user "%s"' % user ] })
 
-	messages.append('Added new user \"' + user + '\" successfully')
+	messages.append('Added new user "%s" successfully' % user)
 	return (True, {'messages': messages, 'params': { 'user': user }})
 
 def validateAddClusterInitial(self, request):
@@ -206,13 +197,13 @@
 	if not check_certs or cur_host_trusted:
 		try:
 			if cur_host_fp is not None and cur_host_fp != cur_fp[1]:
-				errmsg = 'The key fingerprint for %s has changed from under us. It was \"%s\" and is now \"%s\".' \
+				errmsg = 'The key fingerprint for %s has changed from under us. It was "%s" and is now "%s."' \
 					% (cur_host, cur_host_fp, cur_fp[1])
 				request.SESSION.set('add_cluster_initial', cur_entry)
 				luci_log.info('SECURITY: %s' % errmsg)
 				return (False, { 'errors': [ errmsg ] })
 			if trust_shown is True and cur_host_trusted is False:
-				errmsg = 'You must elect to trust \"%s\" or abort the addition of the cluster to Luci.' % cur_host
+				errmsg = 'You must elect to trust "%s" or abort the addition of the cluster to Luci.' % cur_host
 				request.SESSION.set('add_cluster_initial', cur_entry)
 				return (False, { 'errors': [ errmsg ] })
 			rc.trust()
@@ -259,7 +250,7 @@
 			errmsg = 'Unable to authenticate to the ricci agent on %s: %s' % (cur_host, str(e))
 			luci_log.debug_verbose('vACI5: %s: %s' % (cur_host, str(e)))
 			request.SESSION.set('add_cluster_initial', cur_entry)
-			return (False, { 'errors': [ 'Unable to authenticate to the ricci agent on \"%s\"' % cur_host ] })
+			return (False, { 'errors': [ 'Unable to authenticate to the ricci agent on "%s"' % cur_host ] })
 
 	del cur_entry
 
@@ -276,9 +267,9 @@
 				pass
 
 		if not cluster_info:
-			errmsg = 'An error occurred while attempting to retrieve the cluster.conf file from \"%s\"' % cur_host
+			errmsg = 'An error occurred while attempting to retrieve the cluster.conf file from "%s"' % cur_host
 		else:
-			errmsg = '\"%s\" reports is not a member of any cluster.' % cur_host
+			errmsg = '"%s" reports it is not a member of any cluster.' % cur_host
 		return (False, { 'errors': [ errmsg ] })
 
 	cluster_name = cluster_info[0]
@@ -301,10 +292,10 @@
 	# Make sure a cluster with this name is not already managed before
 	# going any further.
 	try:
-		dummy = self.restrictedTraverse(CLUSTER_FOLDER_PATH + cluster_name)
+		dummy = self.restrictedTraverse('%s%s' % (CLUSTER_FOLDER_PATH, cluster_name))
 		if not dummy:
 			raise Exception, 'no existing cluster'
-		errors.append('A cluster named \"%s\" is already managed.')
+		errors.append('A cluster named "%s" is already managed.' % cluster_name)
 		if not prev_auth:
 			try:
 				rc.unauth()
@@ -320,7 +311,7 @@
 				rc.unauth()
 			except:
 				pass
-		return (False, { 'errors': [ 'Error retrieving the nodes list for cluster \"%s\" from node \"%s\"' % (cluster_name, cur_host) ] })
+		return (False, { 'errors': [ 'Error retrieving the nodes list for cluster "%s" from node "%s"' % (cluster_name, cur_host) ] })
 
 	same_node_passwds = False
 	try:
@@ -369,7 +360,7 @@
 				raise Exception, 'no hostname'
 			cur_host = sysData[0]
 			if cur_host in system_list:
-				errors.append('You have added \"%s\" more than once.' % cur_host)
+				errors.append('You have added "%s" more than once.' % cur_host)
 				raise Exception, '%s added more than once' % cur_host
 		except:
 			i += 1
@@ -408,7 +399,7 @@
 				if cur_set_trust is True and cur_fp is not None:
 					cur_system['fp'] = cur_fp
 					if cur_fp != fp[1]:
-						errmsg = '1The key fingerprint for %s has changed from under us. It was \"%s\" and is now \"%s\".' % (cur_host, cur_fp, fp[1])
+						errmsg = 'The key fingerprint for %s has changed from under us. It was "%s" and is now "%s."' % (cur_host, cur_fp, fp[1])
 						errors.append(errmsg)
 						luci_log.info('SECURITY: %s' % errmsg)
 						cur_system['error'] = True
@@ -446,7 +437,7 @@
 				if not rc.trusted() and (trust_shown is True and cur_set_trust is False):
 					incomplete = True
 					cur_system['error'] = True
-					errors.append('You must either trust \"%s\" or remove it.' % cur_host)
+					errors.append('You must either trust "%s" or remove it.' % cur_host)
 				else:
 					# The user doesn't care. Trust the system.
 					rc.trust()
@@ -563,7 +554,7 @@
 					cur_cluster_name = cluster_info[1]
 
 				if cur_cluster_name:
-					err_msg = 'Node %s reports it is in cluster \"%s\" and we expect \"%s\"' \
+					err_msg = 'Node %s reports it is in cluster "%s" and we expect "%s"' \
 						% (cur_host, cur_cluster_name, cluster_name)
 				else:
 					err_msg = 'Node %s reports it is not a member of any cluster' % cur_host
@@ -580,7 +571,7 @@
 
 			cur_os = resolveOSType(rc.os())
 			if cur_os != cluster_os:
-				luci_log.debug_verbose('VAC5a: \"%s\" / \"%s\" -> \"%s\"' \
+				luci_log.debug_verbose('VAC5a: "%s" / "%s" -> "%s"' \
 					% (cluster_os, rc.os(), cur_os))
 				incomplete = True
 				cur_system['errors'] = True
@@ -657,7 +648,7 @@
 				errors.append(csResult)
 			else:
 				delete_keys.append(i)
-				messages.append('Added storage system \"%s\" successfully' \
+				messages.append('Added storage system "%s" successfully' \
 					% cur_host)
 
 	for i in delete_keys:
@@ -687,109 +678,118 @@
 	return (return_code, { 'errors': errors, 'messages': messages})
 
 def validatePerms(self, request):
-	userId = None
 	messages = list()
 	errors = list()
 
-	try:
-		userId = request.form['userList']
-	except:
-		return (False, {'errors': [ 'No user specified' ], 'params': { 'user': userId }})
+	username = None
+	if not request.form.has_key('userList'):
+		luci_log.debug_verbose('VP0: no user given')
+		errors.append('No user name was given.')
+	else:
+		username = request.form['userList'].strip()
 
-	user = self.portal_membership.getMemberById(userId)
-	if not user:
-		return (False, {'errors': [ 'Invalid user specified' ], 'params': { 'user': userId }})
+	user_id = None
+	if username is not None:
+		try:
+			user = self.portal_membership.getMemberById(username)
+			if not user:
+				raise Exception, 'no user'
+			user_id = user.getUserId()
+		except Exception, e:
+			luci_log.debug_verbose('VP1: no user "%s": %s' % (username, str(e)))
+			errors.append('An invalid user "%s" was given.' % username)
 
-	userId = user.getUserId()
+	if len(errors) > 0:
+		return (False, { 'errors': errors })
 
-	clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-	if not '__CLUSTER' in request.form:
+	clusters = self.restrictedTraverse('%s/systems/cluster/objectItems' % PLONE_ROOT)('Folder')
+	if not request.form.has_key('__CLUSTER'):
 		for i in clusters:
 			try:
 				if user.has_role('View', i[1]):
-					roles = list(i[1].get_local_roles_for_userid(userId))
+					roles = list(i[1].get_local_roles_for_userid(user_id))
 					roles.remove('View')
 
 					if roles:
-						i[1].manage_setLocalRoles(userId, roles)
+						i[1].manage_setLocalRoles(user_id, roles)
 					else:
-						i[1].manage_delLocalRoles([userId])
-					messages.append('Removed permission for ' + userId + ' for cluster ' + i[0])
+						i[1].manage_delLocalRoles([ user_id ])
+					messages.append('Removed permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 			except:
-				errors.append('Failed to remove permission for ' + userId + ' for cluster ' + i[0])
+				errors.append('Failed to remove permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 	else:
 		for i in clusters:
 			if i[0] in request.form['__CLUSTER']:
 				try:
 					if not user.has_role('View', i[1]):
-						roles = list(i[1].get_local_roles_for_userid(userId))
+						roles = list(i[1].get_local_roles_for_userid(user_id))
 						roles.append('View')
-						i[1].manage_setLocalRoles(userId, roles)
-						messages.append('Added permission for ' + userId + ' for cluster ' + i[0])
+						i[1].manage_setLocalRoles(user_id, roles)
+						messages.append('Added permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 				except:
-					errors.append('Failed to add permission for ' + userId + ' for cluster ' + i[0])
+					errors.append('Failed to add permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 			else:
 				try:
 					if user.has_role('View', i[1]):
-						roles = list(i[1].get_local_roles_for_userid(userId))
+						roles = list(i[1].get_local_roles_for_userid(user_id))
 						roles.remove('View')
 
 						if roles:
-							i[1].manage_setLocalRoles(userId, roles)
+							i[1].manage_setLocalRoles(user_id, roles)
 						else:
-							i[1].manage_delLocalRoles([userId])
+							i[1].manage_delLocalRoles([ user_id ])
 
-						messages.append('Removed permission for ' + userId + ' for cluster ' + i[0])
+						messages.append('Removed permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 				except:
-					errors.append('Failed to remove permission for ' + userId + ' for cluster ' + i[0])
+					errors.append('Failed to remove permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 
-	storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
-	if not '__SYSTEM' in request.form:
+	storage = self.restrictedTraverse('%s/systems/storage/objectItems' % PLONE_ROOT)('Folder')
+	if not request.form.has_key('__SYSTEM'):
 		for i in storage:
 			try:
 				if user.has_role('View', i[1]):
-					roles = list(i[1].get_local_roles_for_userid(userId))
+					roles = list(i[1].get_local_roles_for_userid(user_id))
 					roles.remove('View')
 
 					if roles:
-						i[1].manage_setLocalRoles(userId, roles)
+						i[1].manage_setLocalRoles(user_id, roles)
 					else:
-						i[1].manage_delLocalRoles([userId])
-					messages.append('Removed permission for ' + userId + ' for ' + i[0])
+						i[1].manage_delLocalRoles([ user_id ])
+					messages.append('Removed permission for user "%s" for system "%s"' % (user_id, i[0]))
 			except:
-				errors.append('Failed to remove permission for ' + userId + ' for ' + i[0])
+				errors.append('Failed to remove permission for user "%s" for system "%s"' % (user_id, i[0]))
 	else:
 		for i in storage:
 			if i[0] in request.form['__SYSTEM']:
 				try:
 					if not user.has_role('View', i[1]):
-						roles = list(i[1].get_local_roles_for_userid(userId))
+						roles = list(i[1].get_local_roles_for_userid(user_id))
 						roles.append('View')
-						i[1].manage_setLocalRoles(userId, roles)
-						messages.append('Added permission for ' + userId + ' for system ' + i[0])
+						i[1].manage_setLocalRoles(user_id, roles)
+						messages.append('Added permission for user "%s" for system "%s"' % (user_id, i[0]))
 				except:
-					errors.append('Failed to add permission for ' + userId + ' for system ' + i[0])
+					errors.append('Failed to add permission for user "%s" for system "%s"' % (user_id, i[0]))
 			else:
 				try:
 					if user.has_role('View', i[1]):
-						roles = list(i[1].get_local_roles_for_userid(userId))
+						roles = list(i[1].get_local_roles_for_userid(user_id))
 						roles.remove('View')
 
 						if roles:
-							i[1].manage_setLocalRoles(userId, roles)
+							i[1].manage_setLocalRoles(user_id, roles)
 						else:
-							i[1].manage_delLocalRoles([userId])
+							i[1].manage_delLocalRoles([ user_id ])
 
-						messages.append('Removed permission for ' + userId + ' for system ' + i[0])
+						messages.append('Removed permission for user "%s" for system "%s"' % (user_id, i[0]))
 				except:
-					errors.append('Failed to remove permission for ' + userId + ' for system ' + i[0])
+					errors.append('Failed to remove permission for user "%s" for system "%s"' % (user_id, i[0]))
 
 	if len(errors) > 0:
-		returnCode = False
+		ret = False
 	else:
-		returnCode = True
+		ret = True
 
-	return (returnCode, {'errors': errors, 'messages': messages, 'params': {'user': userId }})
+	return (ret, {'errors': errors, 'messages': messages, 'params': {'user': user_id }})
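The grant/revoke sequence above now appears four times (clusters and storage, add and remove). A minimal sketch of a shared helper -- hypothetical name set_view_role -- using only the Zope local-role calls already present in this hunk:

def set_view_role(obj, user_id, grant):
	# Grant or revoke the 'View' local role for user_id on obj.
	roles = list(obj.get_local_roles_for_userid(user_id))
	if grant and not 'View' in roles:
		roles.append('View')
		obj.manage_setLocalRoles(user_id, roles)
	elif not grant and 'View' in roles:
		roles.remove('View')
		if roles:
			obj.manage_setLocalRoles(user_id, roles)
		else:
			obj.manage_delLocalRoles([ user_id ])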
 
 def validateAuthenticate(self, request):
 	try:
@@ -861,17 +861,11 @@
 		except:
 			pass
 
-	if len(errors) > 0:
-		return_code = False
-	else:
-		return_code = True
-
 	if incomplete:
 		try:
 			request.SESSION.set('auth_systems', system_list)
 		except Exception, e:
 			luci_log.debug_verbose('validateAuthenticate2: %s' % str(e))
-		return_code = False
 	else:
 		try:
 			request.SESSION.delete('auth_systems')
@@ -897,28 +891,6 @@
 	validateAuthenticate
 ]
 
-def userAuthenticated(self):
-	try:
-		if (isAdmin(self) or getSecurityManager().getUser().has_role('Authenticated', self.restrictedTraverse(PLONE_ROOT))):
-			return True
-	except Exception, e:
-		luci_log.debug_verbose('UA0: %s' % str(e)) 
-	return False
-
-def isAdmin(self):
-	try:
-		return getSecurityManager().getUser().has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
-	except Exception, e:
-		luci_log.debug_verbose('IA0: %s' % str(e)) 
-	return False
-
-def userIsAdmin(self, userId):
-	try:
-		return self.portal_membership.getMemberById(userId).has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
-	except Exception, e:
-		luci_log.debug_verbose('UIA0: %s: %s' % (userId, str(e)))
-	return False
-
 def homebaseControlPost(self, request):
 	if 'ACTUAL_URL' in request:
 		url = request['ACTUAL_URL']
@@ -932,7 +904,7 @@
 			request.SESSION.set('checkRet', {})
 		except:
 			pass
-		return homebasePortal(self, request, '.', '0')
+		return homebasePortal(self, '.', '0')
 
 	try:
 		validatorFn = formValidators[pagetype - 1]
@@ -941,7 +913,7 @@
 			request.SESSION.set('checkRet', {})
 		except:
 			pass
-		return homebasePortal(self, request, '.', '0')
+		return homebasePortal(self, '.', '0')
 
 	ret = validatorFn(self, request)
 	params = None
@@ -951,7 +923,7 @@
 			params = ret[1]['params']
 		request.SESSION.set('checkRet', ret[1])
 
-	return homebasePortal(self, request, url, pagetype, params)
+	return homebasePortal(self, url, pagetype, params)
 
 def homebaseControl(self, request):
 	if request.REQUEST_METHOD == 'POST':
@@ -972,9 +944,9 @@
 	else:
 		pagetype = '0'
 
-	return homebasePortal(self, request, url, pagetype)
+	return homebasePortal(self, url, pagetype)
 
-def homebasePortal(self, request=None, url=None, pagetype=None, params=None):
+def homebasePortal(self, url=None, pagetype=None, params=None):
 	ret = {}
 	temp = list()
 	index = 0
@@ -990,7 +962,7 @@
 		if havePermAddStorage(self):
 			addSystem = {}
 			addSystem['Title'] = 'Add a System'
-			addSystem['absolute_url'] = url + '?pagetype=' + HOMEBASE_ADD_SYSTEM
+			addSystem['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_ADD_SYSTEM)
 			addSystem['Description'] = 'Add a system to the Luci storage management interface.'
 			if pagetype == HOMEBASE_ADD_SYSTEM:
 				cur = addSystem
@@ -1007,7 +979,7 @@
 		if havePermAddCluster(self):
 			addCluster = {}
 			addCluster['Title'] = 'Add an Existing Cluster'
-			addCluster['absolute_url'] = url + '?pagetype=' + HOMEBASE_ADD_CLUSTER_INITIAL
+			addCluster['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_ADD_CLUSTER_INITIAL)
 			addCluster['Description'] = 'Add an existing cluster to the Luci cluster management interface.'
 			if pagetype == HOMEBASE_ADD_CLUSTER_INITIAL or pagetype == HOMEBASE_ADD_CLUSTER:
 				addCluster['currentItem'] = True
@@ -1027,7 +999,7 @@
 		if (havePermRemStorage(self) and havePermRemCluster(self) and (getStorage(self) or getClusters(self))):
 			remSystem = {}
 			remSystem['Title'] = 'Manage Systems'
-			remSystem['absolute_url'] = url + '?pagetype=' + HOMEBASE_DEL_SYSTEM
+			remSystem['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_DEL_SYSTEM)
 			remSystem['Description'] = 'Update or remove storage systems and clusters.'
 			if pagetype == HOMEBASE_DEL_SYSTEM:
 				remSystem['currentItem'] = True
@@ -1037,7 +1009,8 @@
 				remSystem['currentItem'] = False
 			index += 1
 			temp.append(remSystem)
-	except: pass
+	except:
+		pass
 
 #
 # Add a Luci user.
@@ -1047,7 +1020,7 @@
 		if havePermAddUser(self):
 			addUser = {}
 			addUser['Title'] = 'Add a User'
-			addUser['absolute_url'] = url + '?pagetype=' + HOMEBASE_ADD_USER
+			addUser['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_ADD_USER)
 			addUser['Description'] = 'Add a user to the Luci interface.'
 			if pagetype == HOMEBASE_ADD_USER:
 				addUser['currentItem'] = True
@@ -1057,7 +1030,8 @@
 				addUser['currentItem'] = False
 			index += 1
 			temp.append(addUser)
-	except: pass
+	except:
+		pass
 
 #
 # Delete a Luci user
@@ -1067,7 +1041,7 @@
 		if (self.portal_membership.listMembers() and havePermDelUser(self)):
 			delUser = {}
 			delUser['Title'] = 'Delete a User'
-			delUser['absolute_url'] = url + '?pagetype=' + HOMEBASE_DEL_USER
+			delUser['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_DEL_USER)
 			delUser['Description'] = 'Delete a Luci user.'
 			if pagetype == HOMEBASE_DEL_USER:
 				delUser['currentItem'] = True
@@ -1086,7 +1060,7 @@
 		if (havePermEditPerms(self) and self.portal_membership.listMembers() and (getStorage(self) or getClusters(self))):
 			userPerm = {}
 			userPerm['Title'] = 'User Permissions'
-			userPerm['absolute_url'] = url + '?pagetype=' + HOMEBASE_PERMS
+			userPerm['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_PERMS)
 			userPerm['Description'] = 'Set permissions for Luci users.'
 			if pagetype == HOMEBASE_PERMS:
 				userPerm['currentItem'] = True
@@ -1102,9 +1076,13 @@
 		ret['curIndex'] = 0
 
 	if cur and 'absolute_url' in cur and params:
+		import cgi
 		cur['base_url'] = cur['absolute_url']
+		param_list = list()
 		for i in params:
-			cur['absolute_url'] += '&' + cgi.escape(i) + '=' + cgi.escape(params[i])
+			param_list.append('&%s=%s' % (cgi.escape(i), cgi.escape(params[i])))
+		cur['absolute_url'] = '%s%s' % (cur['absolute_url'], ''.join(param_list))
 	elif cur and 'absolute_url' in cur:
 		cur['base_url'] = cur['absolute_url']
 	else:
@@ -1114,111 +1092,9 @@
 	ret['children'] = temp
 	return ret
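Two notes on the hunk above: folding the concatenation into a single assignment avoids rebinding temp, which still holds the navigation children returned at the end of this function; and cgi.escape escapes HTML metacharacters (&, <, >), not URL-reserved characters. A hedged sketch of the same append using the standard urllib.urlencode, which joins and URL-quotes in one step, assuming params is a flat dict of strings:

# Illustrative alternative only; urllib.urlencode quotes each key and value.
import urllib

if cur and 'absolute_url' in cur and params:
	cur['base_url'] = cur['absolute_url']
	cur['absolute_url'] = '%s&%s' % (cur['absolute_url'], urllib.urlencode(params))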
 
-def getClusterSystems(self, clusterName):
-	if isAdmin(self):
-		try:
-			return self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName + '/objectItems')('Folder')
-		except Exception, e:
-			luci_log.debug_verbose('GCSy0: %s: %s' % (clusterName, str(e)))
-			return None
-
-	try:
-		i = getSecurityManager().getUser()
-		if not i:
-			raise Exception, 'security manager says no user'
-	except Exception, e:
-		luci_log.debug_verbose('GCSy1: %s: %s' % (clusterName, str(e)))
-		return None
-
-	try:
-		csystems = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName + '/objectItems')('Folder')
-		if not csystems or len(csystems) < 1:
-			return None
-	except Exception, e:
-		luci_log.debug_verbose('GCSy2: %s: %s' % (clusterName, str(e)))
-		return None
-
-	allowedCSystems = list()
-	for c in csystems:
-		try:
-			if i.has_role('View', c[1]):
-				allowedCSystems.append(c)
-		except Exception, e:
-			luci_log.debug_verbose('GCSy3: %s: %s: %s' \
-				% (clusterName, c[0], str(e)))
-
-	return allowedCSystems
-
-def getClusters(self):
-	if isAdmin(self):
-		try:
-			return self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-		except Exception, e:
-			luci_log.debug_verbose('GC0: %s' % str(e))
-			return None
-	try:
-		i = getSecurityManager().getUser()
-		if not i:
-			raise Exception, 'GSMGU failed'
-	except Exception, e:
-		luci_log.debug_verbose('GC1: %s' % str(e))
-		return None
-
-	try:
-		clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-		if not clusters or len(clusters) < 1:
-			return None
-	except Exception, e:
-		luci_log.debug_verbose('GC2: %s' % str(e))
-		return None
-
-	allowedClusters = list()
-	for c in clusters:
-		try:
-			if i.has_role('View', c[1]):
-				allowedClusters.append(c)
-		except Exception, e:
-			luci_log.debug_verbose('GC3: %s: %s' % (c[0], str(e)))
-
-	return allowedClusters
-
-def getStorage(self):
-	if isAdmin(self):
-		try:
-			return self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
-		except Exception, e:
-			luci_log.debug_verbose('GS0: %s' % str(e))
-			return None
-
-	try:
-		i = getSecurityManager().getUser()
-		if not i:
-			raise Exception, 'GSMGU failed'
-	except Exception, e:
-		luci_log.debug_verbose('GS1: %s' % str(e))
-		return None
-
-	try:
-		storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
-		if not storage or len(storage) < 1:
-			return None
-	except Exception, e:
-		luci_log.debug_verbose('GS2: %s' % str(e))
-		return None
-
-	allowedStorage = list()
-	for s in storage:
-		try:
-			if i.has_role('View', s[1]):
-				allowedStorage.append(s)
-		except Exception, e:
-			luci_log.debug_verbose('GS3: %s' % str(e))
-
-	return allowedStorage
-
 def createSystem(self, host, passwd):
 	try:
-		dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + host)).objectItems()
+		dummy = self.restrictedTraverse('%s%s' % (STORAGE_FOLDER_PATH, host)).objectItems()
 		luci_log.debug_verbose('CS0: %s already exists' % host)
 		return 'Storage system %s is already managed' % host
 	except:
@@ -1249,7 +1125,7 @@
 		return 'Authentication for storage system %s failed' % host
 
 	try:
-		dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + host)).objectItems()
+		dummy = self.restrictedTraverse('%s%s' % (STORAGE_FOLDER_PATH, host)).objectItems()
 		luci_log.debug_verbose('CS4 %s already exists' % host)
 		return 'Storage system %s is already managed' % host
 	except:
@@ -1263,7 +1139,7 @@
 
 	try:
 		ssystem.manage_addFolder(host, '__luci__:system')
-		newSystem = self.restrictedTraverse(STORAGE_FOLDER_PATH + host)
+		newSystem = self.restrictedTraverse('%s%s' % (STORAGE_FOLDER_PATH, host))
 	except Exception, e:
 		luci_log.debug_verbose('CS6 %s: %s' % (host, str(e)))
 		return 'Unable to create DB entry for storage system %s' % host
@@ -1277,283 +1153,6 @@
 
 	return None
 
-def abortManageCluster(self, request):
-	pass
-
-def manageCluster(self, clusterName, node_list, cluster_os):
-	clusterName = str(clusterName)
-
-	try:
-		clusters = self.restrictedTraverse(CLUSTER_FOLDER_PATH)
-		if not clusters:
-			raise Exception, 'cannot find the cluster entry in the DB'
-	except Exception, e:
-		luci_log.debug_verbose('MC0: %s: %s' % (clusterName, str(e)))
-		return 'Unable to create cluster %s: the cluster directory is missing.' % clusterName
-
-	try:
-		newCluster = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName)
-		if newCluster:
-			luci_log.debug_verbose('MC1: cluster %s: already exists' % clusterName)
-			return 'A cluster named %s is already managed by Luci' % clusterName
-	except:
-		pass
-
-	try:
-		clusters.manage_addFolder(clusterName, '__luci__:cluster')
-		newCluster = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName)
-		if not newCluster:
-			raise Exception, 'unable to create the cluster DB entry for %s' % clusterName
-	except Exception, e:
-		luci_log.debug_verbose('MC2: %s: %s' % (clusterName, str(e)))
-		return 'Unable to create cluster %s: %s' % (clusterName, str(e))
-
-	try:
-		newCluster.manage_acquiredPermissions([])
-		newCluster.manage_role('View', ['Access Contents Information', 'View'])
-	except Exception, e:
-		luci_log.debug_verbose('MC3: %s: %s' % (clusterName, str(e)))
-		try:
-			clusters.manage_delObjects([clusterName])
-		except Exception, e:
-			luci_log.debug_verbose('MC4: %s: %s' % (clusterName, str(e)))
-		return 'Unable to set permissions on new cluster: %s: %s' % (clusterName, str(e))
-
-	try:
-		newCluster.manage_addProperty('cluster_os', cluster_os, 'string')
-	except Exception, e:
-		luci_log.debug_verbose('MC5: %s: %s: %s' \
-			% (clusterName, cluster_os, str(e)))
-
-	for i in node_list:
-		host = node_list[i]['host']
-
-		try:
-			newCluster.manage_addFolder(host, '__luci__:csystem:' + clusterName)
-			newSystem = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH + clusterName + '/' + host))
-			if not newSystem:
-				raise Exception, 'unable to create cluster system DB entry for node %s' % host
-			newSystem.manage_acquiredPermissions([])
-			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except Exception, e:
-			try:
-				clusters.manage_delObjects([clusterName])
-			except Exception, e:
-				luci_log.debug_verbose('MC6: %s: %s: %s' \
-					% (clusterName, host, str(e)))
-
-			luci_log.debug_verbose('MC7: %s: %s: %s' \
-				% (clusterName, host, str(e)))
-			return 'Unable to create cluster node %s for cluster %s: %s' \
-				% (host, clusterName, str(e))
-
-	try:
-		ssystem = self.restrictedTraverse(STORAGE_FOLDER_PATH)
-		if not ssystem:
-			raise Exception, 'The storage DB entry is missing'
-	except Exception, e:
-		luci_log.debug_verbose('MC8: %s: %s: %s' % (clusterName, host, str(e)))
-		return 'Error adding storage node %s: %s' % (host, str(e))
-
-	# Only add storage systems if the cluster and cluster node DB
-	# objects were added successfully.
-	for i in node_list:
-		host = node_list[i]['host']
-
-		try:
-			# It's already there, as a storage system, no problem.
-			dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + host)).objectItems()
-			continue
-		except:
-			pass
-
-		try:
-			ssystem.manage_addFolder(host, '__luci__:system')
-			newSystem = self.restrictedTraverse(STORAGE_FOLDER_PATH + host)
-			newSystem.manage_acquiredPermissions([])
-			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except Exception, e:
-			luci_log.debug_verbose('MC9: %s: %s: %s' % (clusterName, host, str(e)))
-
-def createClusterSystems(self, clusterName, node_list):
-	try:
-		clusterObj = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName)
-		if not clusterObj:
-			raise Exception, 'cluster %s DB entry is missing' % clusterName
-	except Exception, e:
-		luci_log.debug_verbose('CCS0: %s: %s' % (clusterName, str(e)))
-		return 'No cluster named \"%s\" is managed by Luci' % clusterName
-
-	for x in node_list:
-		i = node_list[x]
-		host = str(i['host'])
-
-		try:
-			clusterObj.manage_addFolder(host, '__luci__:csystem:' + clusterName)
-		except Exception, e:
-			luci_log.debug_verbose('CCS0a: %s: %s: %s' % (clusterName, host, str(e)))
-		try:
-			newSystem = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName + '/' + host)
-			if not newSystem:
-				raise Exception, 'cluster node DB entry for %s disappeared from under us' % host
-					
-			newSystem.manage_acquiredPermissions([])
-			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except Exception, e:
-			luci_log.debug_verbose('CCS1: %s: %s: %s' % (clusterName, host, str(e)))
-			return 'Unable to create cluster node %s for cluster %s: %s' \
-				% (host, clusterName, str(e))
-
-	try:
-		ssystem = self.restrictedTraverse(STORAGE_FOLDER_PATH)
-		if not ssystem:
-			raise Exception, 'storage DB entry is missing'
-	except Exception, e:
-		# This shouldn't fail, but if it does, it's harmless right now
-		luci_log.debug_verbose('CCS2: %s: %s' % (clusterName, host, str(e)))
-		return None
-
-	# Only add storage systems if the and cluster node DB
-	# objects were added successfully.
-	for x in node_list:
-		i = node_list[x]
-		host = str(i['host'])
-
-		try:
-			# It's already there, as a storage system, no problem.
-			dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + host)).objectItems()
-			continue
-		except:
-			pass
-
-		try:
-			ssystem.manage_addFolder(host, '__luci__:system')
-			newSystem = self.restrictedTraverse(STORAGE_FOLDER_PATH + host)
-			newSystem.manage_acquiredPermissions([])
-			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except Exception, e:
-			luci_log.debug_verbose('CCS3: %s: %s' % (clusterName, host, str(e)))
-
-def delSystem(self, systemName):
-	try:
-		ssystem = self.restrictedTraverse(STORAGE_FOLDER_PATH)
-		if not ssystem:
-			raise Exception, 'storage DB entry is missing'
-	except Exception, e:
-		luci_log.debug_verbose('delSystem0: %s: %s' % (systemName, str(e)))
-		return 'Unable to find storage system %s: %s' % (systemName, str(e))
-
-	try:
-		rc = RicciCommunicator(systemName, enforce_trust=False)
-		if rc is None:
-			raise Exception, 'rc is None'
-	except Exception, e:
-		try:
-			ssystem.manage_delObjects([ systemName ])
-		except Exception, e:
-			luci_log.debug_verbose('delSystem1: %s: %s' % (systemName, str(e)))
-			return 'Unable to delete the storage system %s' % systemName
-		luci_log.debug_verbose('delSystem2: %s: %s' % (systemName, str(e)))
-		return
-
-	# Only unauthenticate if the system isn't a member of
-	# a managed cluster.
-	cluster_info = rc.cluster_info()
-	if not cluster_info:
-		cluster_name = None
-	elif not cluster_info[0]:
-		cluster_name = cluster_info[1]
-	else:
-		cluster_name = cluster_info[0]
-
-	unauth = False
-	if not cluster_name:
-		# If it's a member of no cluster, unauthenticate
-		unauth = True
-	else:
-		try:
-			dummy = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH + cluster_name + '/' + systemName)).objectItems()
-		except Exception, e:
-			# It's not a member of a managed cluster, so unauthenticate.
-			unauth = True
-
-	if unauth is True:
-		try:
-			rc.unauth()
-		except:
-			pass
-
-	try:
-		ssystem.manage_delObjects([ systemName ])
-	except Exception, e:
-		luci_log.debug_verbose('delSystem3: %s: %s' % (systemName, str(e)))
-		return 'Unable to delete storage system %s: %s' \
-			% (systemName, str(e))
-
-def delCluster(self, clusterName):
-	try:
-		clusters = self.restrictedTraverse(CLUSTER_FOLDER_PATH)
-		if not clusters:
-			raise Exception, 'clusters DB entry is missing'
-	except Exception, e:
-		luci_log.debug_verbose('delCluster0: %s' % str(e))
-		return 'Unable to find cluster %s' % clusterName
-
-	err = delClusterSystems(self, clusterName)
-	if err:
-		return err
-
-	try:
-		clusters.manage_delObjects([ clusterName ])
-	except Exception, e:
-		luci_log.debug_verbose('delCluster1: %s' % str(e))
-		return 'Unable to delete cluster %s' % clusterName
-
-def delClusterSystem(self, cluster, systemName):
-	try:
-		dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + systemName)).objectItems()
-	except:
-		# It's not a storage system, so unauthenticate.
-		try:
-			rc = RicciCommunicator(systemName, enforce_trust=False)
-			rc.unauth()
-		except Exception, e:
-			luci_log.debug_verbose('delClusterSystem0: ricci error for %s: %s' \
-				% (systemName, str(e)))
-
-	try:
-		cluster.manage_delObjects([ systemName ])
-	except Exception, e:
-		err_str = 'Error deleting cluster object %s: %s' % (systemName, str(e))
-		luci_log.debug_verbose('delClusterSystem1: %s' % err_str)
-		return err_str
-
-def delClusterSystems(self, clusterName):
-	try:
-		cluster = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName)
-		if not cluster:
-			raise Exception, 'cluster DB entry is missing'
-
-		try:
-			csystems = getClusterSystems(self, clusterName)
-			if not csystems or len(csystems) < 1:
-				return None
-		except Exception, e:
-			luci_log.debug_verbose('delCluSystems0: %s' % str(e))
-			return None
-	except Exception, er:
-		luci_log.debug_verbose('delCluSystems1: error for %s: %s' \
-			% (clusterName, str(er)))
-		return str(er)
-
-	errors = ''
-	for i in csystems:
-		err = delClusterSystem(self, cluster, i[0])
-		if err:
-			errors += 'Unable to delete the cluster system %s: %s\n' % (i[0], err)
-			luci_log.debug_verbose('delCluSystems2: %s' % err)
-	return errors
-
 def getDefaultUser(self, request):
 	try:
 		user = request.form['userList']
@@ -1595,8 +1194,8 @@
 		perms[userName]['storage'] = {}
 
 		try:
-			clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-			storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
+			clusters = self.restrictedTraverse('%s/systems/cluster/objectItems' % PLONE_ROOT)('Folder')
+			storage = self.restrictedTraverse('%s/systems/storage/objectItems' % PLONE_ROOT)('Folder')
 		except Exception, e:
 			luci_log.debug_verbose('getUserPerms1: user %s: %s' % (userName, str(e)))
 			continue
@@ -1607,7 +1206,6 @@
 			except Exception, e:
 				luci_log.debug_verbose('getUserPerms2: user %s, obj %s: %s' \
 					% (userName, c[0], str(e)))
-				continue
 				
 		for s in storage:
 			try:
@@ -1615,125 +1213,8 @@
 			except Exception, e:
 				luci_log.debug_verbose('getUserPerms2: user %s, obj %s: %s' \
 					% (userName, s[0], str(e)))
-				continue
-
 	return perms
 
-# In case we want to give access to non-admin users in the future
-
-def havePermCreateCluster(self):
-	return isAdmin(self)
-
-def havePermAddStorage(self):
-	return isAdmin(self)
-
-def havePermAddCluster(self):
-	return isAdmin(self)
-
-def havePermAddUser(self):
-	return isAdmin(self)
-
-def havePermDelUser(self):
-	return isAdmin(self)
-
-def havePermRemStorage(self):
-	return isAdmin(self) 
-
-def havePermRemCluster(self):
-	return isAdmin(self) 
-
-def havePermEditPerms(self):
-	return isAdmin(self) 
-
-def getClusterConfNodes(clusterConfDom):
-	cur = clusterConfDom
-	clusterNodes = list()
-
-	for i in cur.childNodes:
-		cur = i
-		if i.nodeName == 'clusternodes':
-			for i in cur.childNodes:
-				if i.nodeName == 'clusternode':
-					clusterNodes.append(i.getAttribute('name'))
-			return clusterNodes
-	return clusterNodes
-
-def getSystems(self):
-	storage = getStorage(self)
-	clusters = getClusters(self)
-	storageList = list()
-	ret = [{}, [], {}]
-
-	need_auth_hash = {}
-	for i in storage:
-		storageList.append(i[0])
-		if testNodeFlag(i[1], CLUSTER_NODE_NEED_AUTH) != False:
-			need_auth_hash[i[0]] = i[1]
-
-	chash = {}
-	for i in clusters:
-		csystems = getClusterSystems(self, i[0])
-		cslist = list()
-		for c in csystems:
-			if testNodeFlag(c[1], CLUSTER_NODE_NEED_AUTH) != False:
-				need_auth_hash[c[0]] = c[1]
-			cslist.append(c[0])
-		chash[i[0]] = cslist
-
-	ret[0] = chash
-	ret[1] = storageList
-	ret[2] = need_auth_hash
-	return ret
-
-def getClusterNode(self, nodename, clustername):
-	try:
-		cluster_node = self.restrictedTraverse(CLUSTER_FOLDER_PATH + str(clustername) + '/' + str(nodename))
-		if not cluster_node:
-			raise Exception, 'cluster node is none'
-		return cluster_node
-	except Exception, e:
-		luci_log.debug_verbose('getClusterNode0: %s %s: %s' \
-			% (nodename, clustername, str(e)))
-		return None
-
-def getStorageNode(self, nodename):
-	try:
-		storage_node = self.restrictedTraverse(STORAGE_FOLDER_PATH + str(nodename))
-		if not storage_node:
-			raise Exception, 'storage node is none'
-		return storage_node
-	except Exception, e:
-		luci_log.debug_verbose('getStorageNode0: %s: %s' % (nodename, str(e)))
-		return None
-
-def testNodeFlag(node, flag_mask):
-	try:
-		flags = node.getProperty('flags')
-		if flags is None:
-			return False
-		return flags & flag_mask != 0
-	except Exception, e:
-		luci_log.debug_verbose('testNodeFlag0: %s' % str(e))
-	return False
-
-def setNodeFlag(node, flag_mask):
-	try:
-		flags = node.getProperty('flags')
-		if flags is None:
-			flags = 0
-		node.manage_changeProperties({ 'flags': flags | flag_mask })
-	except:
-		try:
-			node.manage_addProperty('flags', flag_mask, 'int')
-		except Exception, e:
-			luci_log.debug_verbose('setNodeFlag0: %s' % str(e))
-
-def delNodeFlag(node, flag_mask):
-	try:
-		flags = node.getProperty('flags')
-		if flags is None:
-			return
-		if flags & flag_mask != 0:
-			node.manage_changeProperties({ 'flags': flags & ~flag_mask })
-	except Exception, e:
-		luci_log.debug_verbose('delNodeFlag0: %s' % str(e))
+def getClusterConfNodes(conf_dom):
+	cluster_nodes = conf_dom.getElementsByTagName('clusternode')
+	return map(lambda x: str(x.getAttribute('name')), cluster_nodes)
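A quick illustration of the rewritten helper: the name attribute lives on the individual clusternode elements (getElementsByTagName searches recursively), so mapping over them yields the node-name list the deleted loop used to build. Node names here are hypothetical:

# Illustrative only; a minimal in-memory cluster.conf.
from xml.dom import minidom

conf = minidom.parseString(
	'<cluster><clusternodes>'
	'<clusternode name="node1"/><clusternode name="node2"/>'
	'</clusternodes></cluster>')
print getClusterConfNodes(conf)	# -> ['node1', 'node2']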
--- conga/luci/site/luci/Extensions/ricci_communicator.py	2007/02/12 20:24:28	1.25
+++ conga/luci/site/luci/Extensions/ricci_communicator.py	2007/05/03 20:16:38	1.25.2.1
@@ -24,8 +24,8 @@
         self.__timeout_short = 6
         self.__timeout_long  = 600
         
-        self.__privkey_file = CERTS_DIR_PATH + 'privkey.pem'
-        self.__cert_file = CERTS_DIR_PATH + 'cacert.pem'
+        self.__privkey_file = '%sprivkey.pem' % CERTS_DIR_PATH
+        self.__cert_file = '%scacert.pem' % CERTS_DIR_PATH
         
         try:
             self.ss = SSLSocket(self.__hostname,
@@ -152,7 +152,7 @@
         except:
             errstr = 'Error authenticating to host %s: %s' \
                         % (self.__hostname, str(ret))
-            luci_log.debug_verbose('RC:unauth2:' + errstr)
+            luci_log.debug_verbose('RC:unauth2: %s' % errstr)
             raise RicciError, errstr
         return True
 
@@ -212,31 +212,28 @@
                     batch_node = node.cloneNode(True)
         if batch_node == None:
             luci_log.debug_verbose('RC:PB4: batch node missing <batch/>')
-            raise RicciError, 'missing <batch/> in ricci\'s response from %s' \
+            raise RicciError, 'missing <batch/> in ricci\'s response from "%s"' \
                     % self.__hostname
 
         return batch_node
     
     def batch_run(self, batch_str, async=True):
         try:
-            batch_xml_str = '<?xml version="1.0" ?><batch>' + batch_str + '</batch>'
-            luci_log.debug_verbose('RC:BRun0: attempting batch \"%s\" for host %s' \
-                % (batch_xml_str, self.__hostname))
+            batch_xml_str = '<?xml version="1.0" ?><batch>%s</batch>' % batch_str
+            luci_log.debug_verbose('RC:BRun0: attempting batch "%s" for host "%s"' % (batch_xml_str, self.__hostname))
             batch_xml = minidom.parseString(batch_xml_str).firstChild
         except Exception, e:
-            luci_log.debug_verbose('RC:BRun1: received invalid batch XML for %s: \"%s\": %s' \
-                % (self.__hostname, batch_xml_str, str(e)))
+            luci_log.debug_verbose('RC:BRun1: received invalid batch XML for %s: "%s": "%s"' % (self.__hostname, batch_xml_str, str(e)))
             raise RicciError, 'batch XML is malformed'
 
         try:
             ricci_xml = self.process_batch(batch_xml, async)
             try:
-                luci_log.debug_verbose('RC:BRun2: received XML \"%s\" from host %s in response to batch command.' \
-                    % (ricci_xml.toxml(), self.__hostname))
+                luci_log.debug_verbose('RC:BRun2: received XML "%s" from host %s in response to batch command.' % (ricci_xml.toxml(), self.__hostname))
             except:
                 pass
         except:
-            luci_log.debug_verbose('RC:BRun3: An error occurred while trying to process the batch job: \"%s\"' % batch_xml_str)
+            luci_log.debug_verbose('RC:BRun3: An error occurred while trying to process the batch job: "%s"' % batch_xml_str)
             return None
 
         doc = minidom.Document()
@@ -244,8 +241,7 @@
         return doc
 
     def batch_report(self, batch_id):
-        luci_log.debug_verbose('RC:BRep0: [auth=%d] asking for batchid# %s for host %s' \
-            % (self.__authed, batch_id, self.__hostname))
+        luci_log.debug_verbose('RC:BRep0: [auth=%d] asking for batchid# %s for host %s' % (self.__authed, batch_id, self.__hostname))
 
         if not self.authed():
             raise RicciError, 'Not authenticated to host %s' % self.__hostname
@@ -282,19 +278,18 @@
     
     
     def __send(self, xml_doc, timeout):
-        buff = xml_doc.toxml() + '\n'
+        buff = '%s\n' % xml_doc.toxml()
         try:
             self.ss.send(buff, timeout)
         except Exception, e:
-            luci_log.debug_verbose('RC:send0: Error sending XML \"%s\" to %s: %s' \
-                                   % (buff, self.__hostname, str(e)))
+            luci_log.debug_verbose('RC:send0: Error sending XML "%s" to %s: %s' % (buff, self.__hostname, str(e)))
             raise RicciError, 'write error while sending XML to host %s' \
                   % self.__hostname
         except:
             raise RicciError, 'write error while sending XML to host %s' \
                   % self.__hostname
         try:
-            luci_log.debug_verbose('RC:send1: Sent XML \"%s\" to host %s' \
+            luci_log.debug_verbose('RC:send1: Sent XML "%s" to host %s' \
                 % (xml_doc.toxml(), self.__hostname))
         except:
             pass
@@ -311,14 +306,14 @@
             raise RicciError, 'Error reading data from host %s' % self.__hostname
         except:
             raise RicciError, 'Error reading data from host %s' % self.__hostname
-        luci_log.debug_verbose('RC:recv1: Received XML \"%s\" from host %s' \
+        luci_log.debug_verbose('RC:recv1: Received XML "%s" from host %s' \
             % (xml_in, self.__hostname))
 
         try:
             if doc == None:
                 doc = minidom.parseString(xml_in)
         except Exception, e:
-            luci_log.debug_verbose('RC:recv2: Error parsing XML \"%s" from %s' \
+            luci_log.debug_verbose('RC:recv2: Error parsing XML "%s" from %s' \
                 % (xml_in, str(e)))
             raise RicciError, 'Error parsing XML from host %s: %s' \
                     % (self.__hostname, str(e))
@@ -330,9 +325,8 @@
         
         try:        
             if doc.firstChild.nodeName != 'ricci':
-                luci_log.debug_verbose('RC:recv3: Expecting \"ricci\" got XML \"%s\" from %s' %
-                    (xml_in, self.__hostname))
-                raise Exception, 'Expecting first XML child node to be \"ricci\"'
+                luci_log.debug_verbose('RC:recv3: Expecting "ricci" got XML "%s" from %s' % (xml_in, self.__hostname))
+                raise Exception, 'Expecting first XML child node to be "ricci"'
         except Exception, e:
             raise RicciError, 'Invalid XML ricci response from host %s' \
                     % self.__hostname
@@ -348,8 +342,7 @@
     try:
         return RicciCommunicator(hostname)
     except Exception, e:
-        luci_log.debug_verbose('RC:GRC0: Error creating a ricci connection to %s: %s' \
-            % (hostname, str(e)))
+        luci_log.debug_verbose('RC:GRC0: Error creating a ricci connection to %s: %s' % (hostname, str(e)))
         return None
     pass
 
@@ -418,8 +411,7 @@
                     last = last + 1
                     last = last - 2 * last
     try:
-        luci_log.debug_verbose('RC:BS1: Returning (%d, %d) for batch_status(\"%s\")' \
-            % (last, total, batch_xml.toxml()))
+        luci_log.debug_verbose('RC:BS1: Returning (%d, %d) for batch_status("%s")' % (last, total, batch_xml.toxml()))
     except:
         luci_log.debug_verbose('RC:BS2: Returning last, total')
 
@@ -447,7 +439,7 @@
 # * error_msg:  error message
 def extract_module_status(batch_xml, module_num=1):
     if batch_xml.nodeName != 'batch':
-        luci_log.debug_verbose('RC:EMS0: Expecting \"batch\" got \"%s\"' % batch_xml.toxml())
+        luci_log.debug_verbose('RC:EMS0: Expecting "batch" got "%s"' % batch_xml.toxml())
         raise RicciError, 'Invalid XML node; expecting a batch node'
 
     c = 0
@@ -491,5 +483,5 @@
                     elif status == '5':
                         return -103, 'module removed from schedule'
     
-    raise RicciError, str('no ' + str(module_num) + 'th module in the batch, or malformed response')
+    raise RicciError, 'no %dth module in the batch, or malformed response' % module_num
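A likely motivation for the sweep from string concatenation to '%' formatting in this file, beyond style: '%s' coerces its argument through str(), while '+' raises on non-string operands, which matters in logging paths fed mixed types. A two-line illustration:

batch_id = 3
# 'batchid# ' + batch_id   -> TypeError: cannot concatenate 'str' and 'int' objects
print 'batchid# %s' % batch_id	# -> batchid# 3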
 
--- conga/luci/site/luci/Extensions/ricci_defines.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/ricci_defines.py	2007/05/03 20:16:38	1.1.8.1
@@ -1,14 +1,11 @@
+REQUEST_TAG   = 'request'
+RESPONSE_TAG  = 'response'
 
+FUNC_CALL_TAG = 'function_call'
+FUNC_RESP_TAG = 'function_response'
+SEQUENCE_TAG  = 'sequence'
 
-REQUEST_TAG   ='request'
-RESPONSE_TAG  ='response'
-
-FUNC_CALL_TAG ="function_call"
-FUNC_RESP_TAG ="function_response"
-SEQUENCE_TAG  ='sequence'
-
-
-VARIABLE_TAG  ='var'
+VARIABLE_TAG  = 'var'
 
 VARIABLE_TYPE_INT        = 'int'
 VARIABLE_TYPE_INT_SEL    = 'int_select'
@@ -21,50 +18,46 @@
 VARIABLE_TYPE_LIST_STR   = 'list_str'
 VARIABLE_TYPE_LIST_XML   = 'list_xml'
 
-
 VARIABLE_TYPE_LISTENTRY  = 'listentry'
 VARIABLE_TYPE_FLOAT      = 'float'
 
 
-
-
-BD_TYPE = "block_device"
-BD_HD_TYPE = "hard_drive"
-BD_LV_TYPE = "logical_volume"
-BD_PARTITION_TYPE = "partition"
+BD_TYPE = 'block_device'
+BD_HD_TYPE = 'hard_drive'
+BD_LV_TYPE = 'logical_volume'
+BD_PARTITION_TYPE = 'partition'
 
 BD_TEMPLATE = 'block_device_template'
 
 
-
-MAPPER_TYPE           = "mapper"
-MAPPER_SYS_TYPE       = "hard_drives"
-MAPPER_VG_TYPE        = "volume_group"
-MAPPER_PT_TYPE        = "partition_table"
-MAPPER_MDRAID_TYPE    = "mdraid"
-MAPPER_ATARAID_TYPE   = "ataraid"
-MAPPER_MULTIPATH_TYPE = "multipath"
-MAPPER_CRYPTO_TYPE    = "crypto"
-MAPPER_iSCSI_TYPE     = "iSCSI"
-
-
-SYSTEM_PREFIX = MAPPER_SYS_TYPE + ":"
-VG_PREFIX     = MAPPER_VG_TYPE + ":"
-PT_PREFIX     = MAPPER_PT_TYPE + ":"
-MDRAID_PREFIX = MAPPER_MDRAID_TYPE + ':'
-
-
-MAPPER_SOURCES_TAG = "sources"
-MAPPER_TARGETS_TAG = "targets"
-MAPPER_MAPPINGS_TAG = "mappings"
-MAPPER_NEW_SOURCES_TAG = "new_sources"
-MAPPER_NEW_TARGETS_TAG = "new_targets"
+MAPPER_TYPE           = 'mapper'
+MAPPER_SYS_TYPE       = 'hard_drives'
+MAPPER_VG_TYPE        = 'volume_group'
+MAPPER_PT_TYPE        = 'partition_table'
+MAPPER_MDRAID_TYPE    = 'mdraid'
+MAPPER_ATARAID_TYPE   = 'ataraid'
+MAPPER_MULTIPATH_TYPE = 'multipath'
+MAPPER_CRYPTO_TYPE    = 'crypto'
+MAPPER_iSCSI_TYPE     = 'iSCSI'
+
+
+SYSTEM_PREFIX = '%s:' % MAPPER_SYS_TYPE
+VG_PREFIX     = '%s:' % MAPPER_VG_TYPE
+PT_PREFIX     = '%s:' % MAPPER_PT_TYPE
+MDRAID_PREFIX = '%s:' % MAPPER_MDRAID_TYPE
+
+
+MAPPER_SOURCES_TAG = 'sources'
+MAPPER_TARGETS_TAG = 'targets'
+MAPPER_MAPPINGS_TAG = 'mappings'
+MAPPER_NEW_SOURCES_TAG = 'new_sources'
+MAPPER_NEW_TARGETS_TAG = 'new_targets'
 
 
 
-CONTENT_TYPE = "content"
-CONTENT_FS_TYPE = "filesystem"
-CONTENT_NONE_TYPE = "none"
+CONTENT_TYPE = 'content'
+CONTENT_FS_TYPE = 'filesystem'
+CONTENT_NONE_TYPE = 'none'
 CONTENT_MS_TYPE = 'mapper_source'
 CONTENT_HIDDEN_TYPE = 'hidden'
 
@@ -75,7 +68,4 @@
 
 
 
-PROPS_TAG = "properties"
-
-
-
+PROPS_TAG = 'properties'
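With the trailing-colon form restored above, the prefixes line up with how storage_adapters.py strips them (see the sr_id.replace('%s:' % mapper_type, ...) hunk below). A quick sanity check, with a hypothetical volume-group id:

assert VG_PREFIX == 'volume_group:'
sr_id = '%s/dev/vg0' % VG_PREFIX
print sr_id.replace(VG_PREFIX, '')	# -> /dev/vg0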
--- conga/luci/site/luci/Extensions/storage_adapters.py	2006/12/06 22:34:09	1.9
+++ conga/luci/site/luci/Extensions/storage_adapters.py	2007/05/03 20:16:38	1.9.4.1
@@ -44,7 +44,7 @@
   sdata = {}
   sdata['Title'] = "System List"
   sdata['cfg_type'] = "storages"
-  sdata['absolute_url'] = url + "?pagetype=" + STORAGESYS
+  sdata['absolute_url'] = "%s?pagetype=%s" % (url, STORAGESYS)
   sdata['Description'] = "Systems available for storage configuration"
   if pagetype == STORAGESYS or pagetype == '0':
     sdata['currentItem'] = True
@@ -56,7 +56,7 @@
     sdata['show_children'] = False
   
   
-  syslist= list()
+  syslist = list()
   if sdata['show_children']:
     #display_clusters = True
     display_clusters = False
@@ -97,11 +97,12 @@
   if 'nodes' in system_data:
     title = system_data['name']
     if system_data['alias'] != '':
-      title = system_data['alias'] + ' (' + title + ')'
-    ssys['Title'] = 'Cluster ' + title
+      title = '%s (%s)' % (system_data['alias'], title)
+    ssys['Title'] = 'Cluster %s' % title
     ssys['cfg_type'] = "storage"
-    ssys['absolute_url'] = url + '?' + PAGETYPE + '=' + CLUSTER_STORAGE + "&" + CLUNAME + "=" + system_data['name']
-    ssys['Description'] = "Configure shared storage of cluster " + title
+    ssys['absolute_url'] = '%s?%s=%s&%s=%s' \
+      % (url, PAGETYPE, CLUSTER_STORAGE, CLUNAME, system_data['name'])
+    ssys['Description'] = "Configure shared storage of cluster %s" % title
     ssys['currentItem'] = False
     ssys['show_children'] = True
     kids = []
@@ -117,8 +118,9 @@
       return
     ssys['Title'] = system_data['hostname']
     ssys['cfg_type'] = "storage"
-    ssys['absolute_url'] = url + '?' + PAGETYPE + '=' + STORAGE + "&" + STONAME + "=" + system_data['hostname']
-    ssys['Description'] = "Configure storage on " + system_data['hostname']
+    ssys['absolute_url'] = '%s?%s=%s&%s=%s' \
+      % (url, PAGETYPE, STORAGE, STONAME, system_data['hostname'])
+    ssys['Description'] = "Configure storage on %s" % system_data['hostname']
     
     if pagetype == STORAGE:
       if stoname == system_data['hostname']:
@@ -167,18 +169,18 @@
   
   
   buff, dummy1, dummy2 = get_pretty_mapper_info(mapper_type)
-  pretty_names = buff + 's'
-  pretty_names_desc = 'Manage ' + buff + 's'
-  pretty_name = buff
-  pretty_name_desc = 'Manage ' + buff
-  pretty_new_name = 'New ' + buff
-  pretty_new_name_desc = 'Create New ' + buff
+  pretty_names = '%ss' % buff
+  pretty_names_desc = 'Manage %ss' % buff
+  pretty_name_desc = 'Manage %s' % buff
+  pretty_new_name = 'New %s' % buff
+  pretty_new_name_desc = 'Create New %s' % buff
   
   
   srs_p = {}
   srs_p['Title'] = pretty_names
   srs_p['cfg_type'] = "nodes"
-  srs_p['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_MAPPERS + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + mapper_type
+  srs_p['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s' \
+    % (url, PAGETYPE, VIEW_MAPPERS, STONAME, hostname, PT_MAPPER_TYPE, mapper_type)
   srs_p['Description'] = pretty_names_desc
   if (pagetype_req == VIEW_MAPPERS or pagetype_req == VIEW_MAPPER or pagetype_req == ADD_SOURCES or pagetype_req == CREATE_MAPPER or pagetype_req == VIEW_BD) and mapper_type_req == mapper_type:
     srs_p['show_children'] = True
@@ -196,7 +198,8 @@
     sr = {}
     sr['Title'] = pretty_new_name
     sr['cfg_type'] = "nodes"
-    sr['absolute_url'] = url + '?' + PAGETYPE + '=' + CREATE_MAPPER + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + mapper_type
+    sr['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s' \
+      % (url, PAGETYPE, CREATE_MAPPER, STONAME, hostname, PT_MAPPER_TYPE, mapper_type)
     sr['Description'] = pretty_new_name_desc
     sr['show_children'] = False
     
@@ -210,7 +213,7 @@
   # existing mappers
   for sr_xml in mapper_list:
     sr_id = sr_xml.getAttribute('mapper_id')
-    srname = sr_id.replace(mapper_type + ':', '').replace('/dev/', '')
+    srname = sr_id.replace('%s:' % mapper_type, '').replace('/dev/', '')
     
     if srname == '' and mapper_type == MAPPER_VG_TYPE and sr_id == VG_PREFIX:
       #srname = 'Uninitialized PVs'
@@ -219,7 +222,8 @@
     sr = {}
     sr['Title'] = srname
     sr['cfg_type'] = "nodes"
-    sr['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_MAPPER + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + mapper_type + '&' + PT_MAPPER_ID + '=' + sr_id
+    sr['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s&%s=%s' \
+      % (url, PAGETYPE, VIEW_MAPPER, STONAME, hostname, PT_MAPPER_TYPE, mapper_type, PT_MAPPER_ID, sr_id)
     sr['Description'] = pretty_name_desc
     
     if (pagetype_req == VIEW_MAPPER or pagetype_req == ADD_SOURCES or pagetype_req == VIEW_BD) and mapper_id_req == sr_id:
@@ -238,7 +242,8 @@
       tg = {}
       tg['Title'] = tgname
       tg['cfg_type'] = "nodes"
-      tg['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_BD + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + mapper_type + '&' + PT_MAPPER_ID + '=' + sr_id + '&' + PT_PATH + '=' + tg_path
+      tg['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s&%s=%s&%s=%s' \
+        % (url, PAGETYPE, VIEW_BD, STONAME, hostname, PT_MAPPER_TYPE, mapper_type, PT_MAPPER_ID, sr_id, PT_PATH, tg_path)
       tg['Description'] = tgname
       tg['show_children'] = False
       
@@ -294,8 +299,9 @@
   hds_p = {}
   hds_p['Title'] = hds_pretty_name
   hds_p['cfg_type'] = "nodes"
-  hds_p['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_BDS + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + MAPPER_SYS_TYPE + '&' + PT_MAPPER_ID + '=' + SYSTEM_PREFIX
-  hds_p['Description'] = "Manage " + hds_pretty_name
+  hds_p['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s&%s=%s' \
+    % (url, PAGETYPE, VIEW_BDS, STONAME, hostname, PT_MAPPER_TYPE, MAPPER_SYS_TYPE, PT_MAPPER_ID, SYSTEM_PREFIX)
+  hds_p['Description'] = "Manage %s" % hds_pretty_name
   if (pagetype == VIEW_BDS or pagetype == VIEW_BD) and mapper_type == MAPPER_SYS_TYPE:
     hds_p['show_children'] = True
   else:
@@ -315,8 +321,9 @@
     hd = {}
     hd['Title'] = hd_path.replace('/dev/', '')
     hd['cfg_type'] = "nodes"
-    hd['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_BD + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + MAPPER_SYS_TYPE + '&' + PT_MAPPER_ID + '=' + sys_id + '&' + PT_PATH + '=' + hd_path
-    hd['Description'] = 'Manage ' + hd_pretty_name
+    hd['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s&%s=%s&%s=%s' \
+      % (url, PAGETYPE, VIEW_BD, STONAME, hostname, PT_MAPPER_TYPE, MAPPER_SYS_TYPE, PT_MAPPER_ID, sys_id, PT_PATH, hd_path)
+    hd['Description'] = 'Manage %s' % hd_pretty_name
     hd['show_children'] = False
     
     if pagetype == VIEW_BD and mapper_id == sys_id and path == hd_path:
@@ -337,18 +344,18 @@
   mappers_dir = storage_report.get_mappers_dir()
   mapper_templs_dir = storage_report.get_mapper_temps_dir()
   glo_dir = {}
-  for type in mappers_dir:
-    glo_dir[type] = [mappers_dir[type], []]
-  for type in mapper_templs_dir:
-    if type not in glo_dir:
-      glo_dir[type] = [[], mapper_templs_dir[type]]
+  for cur_type in mappers_dir:
+    glo_dir[cur_type] = [mappers_dir[cur_type], []]
+  for cur_type in mapper_templs_dir:
+    if cur_type not in glo_dir:
+      glo_dir[cur_type] = [[], mapper_templs_dir[cur_type]]
     else:
-      glo_dir[type][1] = mapper_templs_dir[type]
+      glo_dir[cur_type][1] = mapper_templs_dir[cur_type]
   
-  for type in glo_dir:
-    if type == MAPPER_SYS_TYPE:
+  for cur_type in glo_dir:
+    if cur_type == MAPPER_SYS_TYPE:
       continue
-    item = create_mapper_subitem(storage_report, request, glo_dir[type][0], glo_dir[type][1])
+    item = create_mapper_subitem(storage_report, request, glo_dir[cur_type][0], glo_dir[cur_type][1])
     if item == None:
       continue
     else:
@@ -362,11 +369,7 @@
 def getStorageURL(self, request, hostname):
   # return URL to manage this storage system
   try:
-    url = request['URL']
+    baseurl = request['URL']
   except KeyError, e:
-    url = "."
-  
-  url += '?' + PAGETYPE + '=' + str(STORAGE)
-  url += '&' + STONAME + '=' + hostname
-  return url
-
+    baseurl = "."
+  return '%s?%s=%s&%s=%s' % (baseurl, PAGETYPE, str(STORAGE), STONAME, hostname)
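For reference, a hypothetical call to the rewritten getStorageURL; the concrete values of the PAGETYPE and STONAME query-parameter names are assumptions, not shown in this patch:

# Illustrative only; self is unused, so None suffices here.
url = getStorageURL(None, {'URL': '/luci/storage'}, 'node1.example.com')
# -> '/luci/storage?<PAGETYPE>=<STORAGE>&<STONAME>=node1.example.com'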
--- conga/luci/site/luci/Extensions/system_adapters.py	2007/02/24 07:02:42	1.2
+++ conga/luci/site/luci/Extensions/system_adapters.py	2007/05/03 20:16:38	1.2.2.1
@@ -1,5 +1,5 @@
 from ricci_communicator import RicciCommunicator
-from ricci_bridge import list_services, updateServices, svc_manage
+from RicciQueries import list_services, updateServices, svc_manage
 from LuciSyslog import LuciSyslog
 from xml.dom import minidom
 




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-03-15 16:41 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-03-15 16:41 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-03-15 16:41:11

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: NFSClient.py 

Log message:
	Fix NFS resource bugs.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.197&r2=1.198
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/NFSClient.py.diff?cvsroot=cluster&r1=1.1&r2=1.2

--- conga/luci/cluster/form-macros	2007/03/12 04:25:41	1.197
+++ conga/luci/cluster/form-macros	2007/03/15 16:41:11	1.198
@@ -4304,15 +4304,15 @@
 		<div metal:use-macro="here/resource-form-macros/macros/gfs_macro" />
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsm'">
+	<tal:block tal:condition="python: type == 'netfs'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsm_macro"/>
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsx'">
+	<tal:block tal:condition="python: type == 'nfsexport'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsx_macro"/>
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsc'">
+	<tal:block tal:condition="python: type == 'nfsclient'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsc_macro"/>
 	</tal:block>
 
--- conga/luci/site/luci/Extensions/NFSClient.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/NFSClient.py	2007/03/15 16:41:11	1.2
@@ -5,7 +5,7 @@
 import gettext
 _ = gettext.gettext
 
-RESOURCE_TYPE=_("NFS Client: ")
+RESOURCE_TYPE=_("NFS Client")
 TAG_NAME = "nfsclient"
 DENY_ALL_CHILDREN = True
 




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-03-14 22:38 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-03-14 22:38 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2007-03-14 22:38:08

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: NFSClient.py 

Log message:
	Fix NFS resource bugs

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.176.2.16&r2=1.176.2.17
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/NFSClient.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.1&r2=1.1.4.1

--- conga/luci/cluster/form-macros	2007/03/12 04:24:33	1.176.2.16
+++ conga/luci/cluster/form-macros	2007/03/14 22:38:08	1.176.2.17
@@ -4304,15 +4304,15 @@
 		<div metal:use-macro="here/resource-form-macros/macros/gfs_macro" />
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsm'">
+	<tal:block tal:condition="python: type == 'netfs'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsm_macro"/>
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsx'">
+	<tal:block tal:condition="python: type == 'nfsexport'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsx_macro"/>
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsc'">
+	<tal:block tal:condition="python: type == 'nfsclient'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsc_macro"/>
 	</tal:block>
 
--- conga/luci/site/luci/Extensions/NFSClient.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/NFSClient.py	2007/03/14 22:38:08	1.1.4.1
@@ -5,7 +5,7 @@
 import gettext
 _ = gettext.gettext
 
-RESOURCE_TYPE=_("NFS Client: ")
+RESOURCE_TYPE=_("NFS Client")
 TAG_NAME = "nfsclient"
 DENY_ALL_CHILDREN = True
 




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-03-14 22:37 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-03-14 22:37 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-03-14 22:37:42

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: NFSClient.py 

Log message:
	Fix NFS resource bugs

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.90.2.21&r2=1.90.2.22
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/NFSClient.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1&r2=1.1.2.1

--- conga/luci/cluster/form-macros	2007/03/12 04:22:25	1.90.2.21
+++ conga/luci/cluster/form-macros	2007/03/14 22:37:41	1.90.2.22
@@ -4304,15 +4304,15 @@
 		<div metal:use-macro="here/resource-form-macros/macros/gfs_macro" />
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsm'">
+	<tal:block tal:condition="python: type == 'netfs'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsm_macro"/>
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsx'">
+	<tal:block tal:condition="python: type == 'nfsexport'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsx_macro"/>
 	</tal:block>
 
-	<tal:block tal:condition="python: type == 'nfsc'">
+	<tal:block tal:condition="python: type == 'nfsclient'">
 		<div metal:use-macro="here/resource-form-macros/macros/nfsc_macro"/>
 	</tal:block>
 
--- conga/luci/site/luci/Extensions/NFSClient.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/NFSClient.py	2007/03/14 22:37:41	1.1.2.1
@@ -5,7 +5,7 @@
 import gettext
 _ = gettext.gettext
 
-RESOURCE_TYPE=_("NFS Client: ")
+RESOURCE_TYPE=_("NFS Client")
 TAG_NAME = "nfsclient"
 DENY_ALL_CHILDREN = True
 




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-03-05 16:50 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-03-05 16:50 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-03-05 16:50:43

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	Show cluster service start/relocate options for specific nodes on the main service list page
	Related: bz230466

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.195&r2=1.196
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.245&r2=1.246

--- conga/luci/cluster/form-macros	2007/02/28 21:54:05	1.195
+++ conga/luci/cluster/form-macros	2007/03/05 16:50:43	1.196
@@ -3915,6 +3915,8 @@
 								tal:condition="not: running"
 								tal:attributes="value svc/delurl | nothing"
 								tal:content="string:Delete this service" />
+							<option value="">----------</option>
+							<option tal:repeat="starturl svc/links" tal:attributes="value starturl/url">Start this service on <span tal:replace="starturl/nodename"/></option>
 						</select>
 						<input type="button" value="Go"
 							onclick="if (this.form.gourl[this.form.gourl.selectedIndex].value && confirm(this.form.gourl[this.form.gourl.selectedIndex].text + '?')) return dropdown(this.form.gourl)" />
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/03/01 20:22:29	1.245
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/03/05 16:50:43	1.246
@@ -3412,6 +3412,7 @@
 		baseurl = '/luci/cluster/index_html'
 
 	try:
+		nodes = model.getNodes()
 		cluname = req['clustername']
 		if not cluname:
 			raise KeyError, 'is blank'
@@ -3428,9 +3429,11 @@
 			itemmap = {}
 			itemmap['name'] = item['name']
 
+			cur_node = None
 			if item['running'] == "true":
+				cur_node = item['nodename']
 				itemmap['running'] = "true"
-				itemmap['nodename'] = item['nodename']
+				itemmap['nodename'] = cur_node
 				itemmap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_STOP
 				itemmap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_RESTART
 			else:
@@ -3438,6 +3441,15 @@
 
 			itemmap['autostart'] = item['autostart']
 
+			starturls = list()
+			for node in nodes:
+				starturl = {}
+				if node.getName() != cur_node:
+					starturl['nodename'] = node.getName()
+					starturl['url'] = baseurl + '?' + 'clustername=' + cluname +'&servicename=' + item['name'] + '&pagetype=' + SERVICE_START + '&nodename=' + node.getName()
+					starturls.append(starturl)
+			itemmap['links'] = starturls
+
 			try:
 				svc = model.retrieveServiceByName(item['name'])
 				itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE
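For reference, the shape of the per-service map this hunk builds, which the tal:repeat over svc/links above iterates; all values here are illustrative:

itemmap = {
	'name': 'httpd',
	'running': 'true',
	'nodename': 'node1',
	'autostart': 'true',
	'links': [
		{'nodename': 'node2',
		 'url': '/luci/cluster/index_html?clustername=c1'
			'&servicename=httpd&pagetype=<SERVICE_START>&nodename=node2'},
	],
}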




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-03-05 16:50 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-03-05 16:50 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-03-05 16:50:09

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	Show cluster service start/relocate options for specific nodes on the main service list page
	Related: bz230466

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.90.2.19&r2=1.90.2.20
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.120.2.22&r2=1.120.2.23

--- conga/luci/cluster/form-macros	2007/03/01 00:31:08	1.90.2.19
+++ conga/luci/cluster/form-macros	2007/03/05 16:50:09	1.90.2.20
@@ -3915,6 +3915,8 @@
 								tal:condition="not: running"
 								tal:attributes="value svc/delurl | nothing"
 								tal:content="string:Delete this service" />
+							<option value="">----------</option>
+							<option tal:repeat="starturl svc/links" tal:attributes="value starturl/url">Start this service on <span tal:replace="starturl/nodename"/></option>
 						</select>
 						<input type="button" value="Go"
 							onclick="if (this.form.gourl[this.form.gourl.selectedIndex].value && confirm(this.form.gourl[this.form.gourl.selectedIndex].text + '?')) return dropdown(this.form.gourl)" />
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/03/01 20:22:33	1.120.2.22
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/03/05 16:50:09	1.120.2.23
@@ -3410,6 +3410,7 @@
 		baseurl = '/luci/cluster/index_html'
 
 	try:
+		nodes = model.getNodes()
 		cluname = req['clustername']
 		if not cluname:
 			raise KeyError, 'is blank'
@@ -3426,9 +3427,11 @@
 			itemmap = {}
 			itemmap['name'] = item['name']
 
+			cur_node = None
 			if item['running'] == "true":
+				cur_node = item['nodename']
 				itemmap['running'] = "true"
-				itemmap['nodename'] = item['nodename']
+				itemmap['nodename'] = cur_node
 				itemmap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_STOP
 				itemmap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_RESTART
 			else:
@@ -3436,6 +3439,15 @@
 
 			itemmap['autostart'] = item['autostart']
 
+			starturls = list()
+			for node in nodes:
+				starturl = {}
+				if node.getName() != cur_node:
+					starturl['nodename'] = node.getName()
+					starturl['url'] = baseurl + '?' + 'clustername=' + cluname +'&servicename=' + item['name'] + '&pagetype=' + SERVICE_START + '&nodename=' + node.getName()
+					starturls.append(starturl)
+			itemmap['links'] = starturls
+
 			try:
 				svc = model.retrieveServiceByName(item['name'])
 				itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-03-05 16:49 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-03-05 16:49 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2007-03-05 16:49:42

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	Show cluster service start/relocate options for specific nodes on the main service list page
	Related: bz230466

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.176.2.14&r2=1.176.2.15
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.227.2.12&r2=1.227.2.13

--- conga/luci/cluster/form-macros	2007/02/28 21:42:22	1.176.2.14
+++ conga/luci/cluster/form-macros	2007/03/05 16:49:42	1.176.2.15
@@ -3915,6 +3915,8 @@
 								tal:condition="not: running"
 								tal:attributes="value svc/delurl | nothing"
 								tal:content="string:Delete this service" />
+							<option value="">----------</option>
+							<option tal:repeat="starturl svc/links" tal:attributes="value starturl/url">Start this service on <span tal:replace="starturl/nodename"/></option>
 						</select>
 						<input type="button" value="Go"
 							onclick="if (this.form.gourl[this.form.gourl.selectedIndex].value && confirm(this.form.gourl[this.form.gourl.selectedIndex].text + '?')) return dropdown(this.form.gourl)" />
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/03/01 20:22:31	1.227.2.12
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/03/05 16:49:42	1.227.2.13
@@ -3410,6 +3410,7 @@
 		baseurl = '/luci/cluster/index_html'
 
 	try:
+		nodes = model.getNodes()
 		cluname = req['clustername']
 		if not cluname:
 			raise KeyError, 'is blank'
@@ -3426,9 +3427,11 @@
 			itemmap = {}
 			itemmap['name'] = item['name']
 
+			cur_node = None
 			if item['running'] == "true":
+				cur_node = item['nodename']
 				itemmap['running'] = "true"
-				itemmap['nodename'] = item['nodename']
+				itemmap['nodename'] = cur_node
 				itemmap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_STOP
 				itemmap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_RESTART
 			else:
@@ -3436,6 +3439,15 @@
 
 			itemmap['autostart'] = item['autostart']
 
+			starturls = list()
+			for node in nodes:
+				starturl = {}
+				if node.getName() != cur_node:
+					starturl['nodename'] = node.getName()
+					starturl['url'] = baseurl + '?' + 'clustername=' + cluname +'&servicename=' + item['name'] + '&pagetype=' + SERVICE_START + '&nodename=' + node.getName()
+					starturls.append(starturl)
+			itemmap['links'] = starturls
+
 			try:
 				svc = model.retrieveServiceByName(item['name'])
 				itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-02-15 22:44 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-02-15 22:44 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-02-15 22:44:03

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: ModelBuilder.py cluster_adapters.py 

Log message:
	Support modifying cluster totem parameters

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.188&r2=1.189
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ModelBuilder.py.diff?cvsroot=cluster&r1=1.23&r2=1.24
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.238&r2=1.239

--- conga/luci/cluster/form-macros	2007/02/14 15:04:34	1.188
+++ conga/luci/cluster/form-macros	2007/02/15 22:44:02	1.189
@@ -755,6 +755,16 @@
 
 						<tr class="systemsTable">
 							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#send_join', 55, 65);">Maximum time to wait before sending a join message</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10" name="send_join"
+									tal:attributes="value string:0" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
 								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#consensus', 55, 65);">Consensus Timeout</a> (ms)
 							</td>
 							<td class="systemsTable">
--- conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/15 18:55:34	1.23
+++ conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/15 22:44:02	1.24
@@ -813,6 +813,12 @@
   def getGULMPtr(self):
     return self.GULM_ptr
 
+  def getCMANPtr(self):
+    return self.CMAN_ptr
+
+  def getTotemPtr(self):
+    return self.TOTEM_ptr
+
   def getLockServer(self, name):
     children = self.GULM_ptr.getChildren()
     for child in children:
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/13 19:50:58	1.238
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/15 22:44:02	1.239
@@ -34,6 +34,7 @@
 from clusterOS import resolveOSType
 from Fence import Fence
 from Method import Method
+from Totem import Totem
 from Device import Device
 from FenceHandler import validateNewFenceDevice, FENCE_OPTS, validateFenceDevice, validate_fenceinstance
 from GeneralError import GeneralError
@@ -1183,6 +1184,376 @@
 			luci_log.debug_verbose('unable to update general properties: %s' % str(e))
 			errors.append('Unable to update the cluster configuration.')
 
+	try:
+		cluster_version = form['cluster_version'].strip()
+		if cluster_version != 'rhel5':
+			raise Exception, 'not rhel5'
+	except:
+		if len(errors) > 0:
+			return (False, {'errors': errors})
+		return (True, {})
+
+	totem = model.getTotemPtr()
+	if totem is None:
+		cp = model.getClusterPtr()
+		totem = Totem()
+		cp.addChild(totem)
+
+	if form.has_key('secauth'):
+		totem.addAttribute('secauth', '1')
+	else:
+		totem.addAttribute('secauth', '0')
+
+	try:
+		rrp_mode = form['rrp_mode'].strip().lower()
+		if not rrp_mode:
+			raise KeyError, 'rrp_mode'
+		if rrp_mode != 'none' and rrp_mode != 'active' and rrp_mode != 'passive':
+			raise Exception, '%s is an invalid value for redundant ring protocol mode' % rrp_mode
+		totem.addAttribute('rrp_mode', str(rrp_mode))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('rrp_mode')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		net_mtu = form['net_mtu'].strip()
+		if not net_mtu:
+			raise KeyError, 'net_mtu'
+		net_mtu = int(net_mtu)
+		if net_mtu < 1:
+			raise ValueError, '%d is an invalid value for network MTU' % net_mtu
+		totem.addAttribute('net_mtu', str(net_mtu))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('net_mtu')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		threads = form['threads'].strip()
+		if not threads:
+			raise KeyError, 'threads'
+		threads = int(threads)
+		if threads < 0:
+			raise ValueError, '%d is an invalid value for number of threads' % threads
+		totem.addAttribute('threads', str(threads))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('threads')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		vsftype = form['vsftype'].strip().lower()
+		if not vsftype:
+			raise KeyError, 'vsftype'
+		if vsftype != 'none' and vsftype != 'ykd':
+			raise ValueError, '%s is an invalid value for virtual synchrony type' % vsftype
+		totem.addAttribute('vsftype', str(vsftype))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('vsftype')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		token = form['token'].strip()
+		if not token:
+			raise KeyError, 'token'
+		token = int(token)
+		if token < 1:
+			raise ValueError, '%d is an invalid value for token timeout' % token
+		totem.addAttribute('token', str(token))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('token')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		token_retransmit = form['token_retransmit'].strip()
+		if not token_retransmit:
+			raise KeyError, 'token_retransmit'
+		token_retransmit = int(token_retransmit)
+		if token_retransmit < 1:
+			raise ValueError, '%d is an invalid value for token retransmit' % token_retransmit
+		totem.addAttribute('token_retransmit', str(token_retransmit))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('token_retransmit')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		hold = form['hold'].strip()
+		if not hold:
+			raise KeyError, 'hold'
+		hold = int(hold)
+		if hold < 1:
+			raise ValueError, '%d is not a valid value for hold token timeout' % hold
+		totem.addAttribute('hold', str(hold))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('hold')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		retransmits_before_loss = form['retransmits_before_loss'].strip()
+		if not retransmits_before_loss:
+			raise KeyError, 'retransmits_before_loss'
+		retransmits_before_loss = int(retransmits_before_loss)
+		if retransmits_before_loss < 1:
+			raise ValueError, '%d is an invalid value for number of retransmits before loss' % retransmits_before_loss
+		totem.addAttribute('retransmits_before_loss', str(retransmits_before_loss))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('retransmits_before_loss')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		join = form['join'].strip()
+		if not join:
+			raise KeyError, 'join'
+		join = int(join)
+		if join < 1:
+			raise ValueError, '%d is an invalid value for join timeout' % join
+		totem.addAttribute('join', str(join))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('join')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		send_join = form['send_join'].strip()
+		if not send_join:
+			raise KeyError, 'send_join'
+		send_join = int(send_join)
+		if send_join < 0:
+			raise ValueError, '%d is an invalid value for time to wait before sending a join message' % send_join
+		totem.addAttribute('send_join', str(send_join))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('send_join')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		consensus = form['consensus'].strip()
+		if not consensus:
+			raise KeyError, 'consensus'
+		consensus = int(consensus)
+		if consensus < 1:
+			raise ValueError, '%d is an invalid value for consensus timeout' % consensus
+		totem.addAttribute('consensus', str(consensus))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('consensus')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		merge = form['merge'].strip()
+		if not merge:
+			raise KeyError, 'merge'
+		merge = int(merge)
+		if merge < 1:
+			raise ValueError, '%d is an invalid value for merge detection timeout' % merge
+		totem.addAttribute('merge', str(merge))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('merge')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		downcheck = form['downcheck'].strip()
+		if not downcheck:
+			raise KeyError, 'downcheck'
+		downcheck = int(downcheck)
+		if downcheck < 1:
+			raise ValueError, '%d is an invalid value for interface down check timeout' % downcheck
+		totem.addAttribute('downcheck', str(downcheck))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('downcheck')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		fail_to_recv_const = form['fail_to_recv_const'].strip()
+		if not fail_to_recv_const:
+			raise KeyError, 'fail_to_recv_const'
+		fail_to_recv_const = int(fail_to_recv_const)
+		if fail_to_recv_const < 1:
+			raise ValueError, '%d is an invalid value for fail to receive constant' % fail_to_recv_const
+		totem.addAttribute('fail_to_recv_const', str(fail_to_recv_const))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('fail_to_recv_const')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		seqno_unchanged_const = form['seqno_unchanged_const'].strip()
+		if not seqno_unchanged_const:
+			raise KeyError, 'seqno_unchanged_const'
+		seqno_unchanged_const = int(seqno_unchanged_const)
+		if seqno_unchanged_const < 1:
+			raise ValueError, '%d is an invalid value for rotations with no multicast traffic before merge detection timeout started' % seqno_unchanged_const
+		totem.addAttribute('seqno_unchanged_const', str(seqno_unchanged_const))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('seqno_unchanged_const')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		heartbeat_failures_allowed = form['heartbeat_failures_allowed'].strip()
+		if not heartbeat_failures_allowed:
+			raise KeyError, 'heartbeat_failures_allowed'
+		heartbeat_failures_allowed = int(heartbeat_failures_allowed)
+		if heartbeat_failures_allowed < 0:
+			raise ValueError, '%d is an invalid value for number of heartbeat failures allowed' % heartbeat_failures_allowed
+		totem.addAttribute('heartbeat_failures_allowed', str(heartbeat_failures_allowed))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('heartbeat_failures_allowed')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		max_network_delay = form['max_network_delay'].strip()
+		if not max_network_delay:
+			raise KeyError, 'max_network_delay'
+		max_network_delay = int(max_network_delay)
+		if max_network_delay < 1:
+			raise ValueError, '%d is an invalid value for maximum network delay' % max_network_delay
+		totem.addAttribute('max_network_delay', str(max_network_delay))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('max_network_delay')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		window_size = form['window_size'].strip()
+		if not window_size:
+			raise KeyError, 'window_size'
+		window_size = int(window_size)
+		if window_size < 1:
+			raise ValueError, '%d is an invalid value for window size' % window_size
+		totem.addAttribute('window_size', str(window_size))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('window_size')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		max_messages = form['max_messages'].strip()
+		if not max_messages:
+			raise KeyError, 'max_messages'
+		max_messages = int(max_messages)
+		if max_messages < 1:
+			raise ValueError, '%d is an invalid value for maximum messages' % max_messages
+		totem.addAttribute('max_messages', str(max_messages))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('max_messages')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		rrp_problem_count_timeout = form['rrp_problem_count_timeout'].strip()
+		if not rrp_problem_count_timeout:
+			raise KeyError, 'rrp_problem_count_timeout'
+		rrp_problem_count_timeout = int(rrp_problem_count_timeout)
+		if rrp_problem_count_timeout < 1:
+			raise ValueError, '%d is an invalid value for RRP problem count timeout' % rrp_problem_count_timeout
+		totem.addAttribute('rrp_problem_count_timeout', str(rrp_problem_count_timeout))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('rrp_problem_count_timeout')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		rrp_problem_count_threshold = form['rrp_problem_count_threshold'].strip()
+		if not rrp_problem_count_threshold:
+			raise KeyError, 'rrp_problem_count_threshold'
+		rrp_problem_count_threshold = int(rrp_problem_count_threshold)
+		if rrp_problem_count_threshold < 1:
+			raise ValueError, '%d is an invalid value for RRP problem count threshold' % rrp_problem_count_threshold
+		totem.addAttribute('rrp_problem_count_threshold', str(rrp_problem_count_threshold))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('rrp_problem_count_threshold')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
+	try:
+		rrp_token_expired_timeout = form['rrp_token_expired_timeout'].strip()
+		if not rrp_token_expired_timeout:
+			raise KeyError, 'rrp_token_expired_timeout'
+		rrp_token_expired_timeout = int(rrp_token_expired_timeout)
+		if rrp_token_expired_timeout < 1:
+			raise ValueError, '%d is an invalid value for RRP token expired timeout' % rrp_token_expired_timeout
+		totem.addAttribute('rrp_token_expired_timeout', str(rrp_token_expired_timeout))
+	except KeyError, e:
+		try:
+			totem.removeAttribute('rrp_token_expired_timeout')
+		except:
+			pass
+	except Exception, e:
+		errors.append(str(e))
+
 	if len(errors) > 0:
 		return (False, {'errors': errors})
 	return (True, {})
@@ -2200,7 +2571,7 @@
 	try:
 		vm_path = request.form['vmpath'].strip()
 		if not vm_path:
-			raise 'blank'
+			raise Exception, 'blank'
 	except Exception, e:
 		luci_log.debug_verbose('validateVM1: no vm path: %s' % str(e))
 		errors.append('No path to the virtual machine configuration file was given.')




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-02-08  3:46 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-02-08  3:46 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2007-02-08 03:46:51

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: FailoverDomain.py ModelBuilder.py 
	                           cluster_adapters.py 
Added files:
	luci/cluster   : validate_fdom.js 
	luci/site/luci/Extensions: FenceXVMd.py 

Log message:
	Support for adding and editing failover domains (bz 215014)
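
	For reference, the new validateFdom() handler drives ModelBuilder to
	emit a failoverdomain block. A hedged usage sketch (method names are
	taken from the patch; the XML shape follows the usual cluster.conf
	schema, with example domain and node names):

		# Sketch: driving the new failover domain API (names from the patch).
		fdom = FailoverDomain()
		fdom.addAttribute('name', 'webfarm')
		fdom.addAttribute('ordered', '1')       # the "Prioritized" checkbox
		fdom.addAttribute('restricted', '0')

		fdn = FailoverDomainNode()
		fdn.addAttribute('name', 'node1.example.com')
		fdn.addAttribute('priority', '1')
		fdom.addChild(fdn)

		model.getFailoverDomainPtr().addChild(fdom)
		# exportModelAsString() should then contain, under <rm><failoverdomains>:
		#   <failoverdomain name="webfarm" ordered="1" restricted="0">
		#     <failoverdomainnode name="node1.example.com" priority="1"/>
		#   </failoverdomain>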

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/validate_fdom.js.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=NONE&r2=1.3.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.176.2.4&r2=1.176.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceXVMd.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FailoverDomain.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.1&r2=1.1.4.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ModelBuilder.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.19.2.2&r2=1.19.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.227.2.3&r2=1.227.2.4

/cvs/cluster/conga/luci/cluster/validate_fdom.js,v  -->  standard output
revision 1.3.2.1
--- conga/luci/cluster/validate_fdom.js
+++ -	2007-02-08 03:46:51.751330000 +0000
@@ -0,0 +1,40 @@
+function fdom_set_prioritized(form, state) {
+	var prilist = form.getElementsByTagName('input');
+	if (!prilist)
+		return (-1);
+	for (var i = 0 ; i < prilist.length ; i++) {
+		if (prilist[i].type == 'text' && prilist[i].className == 'fdom_priority')
+			prilist[i].disabled = !state || !form[prilist[i].id][0].checked;
+	}
+}
+
+function fdom_set_member(form, name, state) {
+	var prioritized = document.getElementById('prioritized');
+	if (!prioritized)
+		return (-1);
+	prioritized = prioritized.checked;
+	var member_pri_elem = document.getElementById(name);
+	if (!member_pri_elem)
+		return (-1);
+	member_pri_elem.disabled = !prioritized || !state;
+}
+
+function validate_add_fdom(form) {
+	var errors = new Array();
+
+	if (!form.name || str_is_blank(form.name.value)) {
+		set_form_err(form.name);
+		errors.push('No name was given for this failover domain.');
+	} else
+		clr_form_err(form.name);
+
+	if (error_dialog(errors))
+		return (-1);
+
+	var confirm_msg = 'Add this failover domain?';
+	if (form.oldname)
+		confirm_msg = 'Update this failover domain?';
+
+	if (confirm(confirm_msg))
+		form.submit();
+}
--- conga/luci/cluster/form-macros	2007/02/07 18:30:53	1.176.2.4
+++ conga/luci/cluster/form-macros	2007/02/08 03:46:51	1.176.2.5
@@ -4322,23 +4322,117 @@
 	</div>
 </div>
 
+<tal:block metal:define-macro="fdom-macro">
+<script type="text/javascript"
+	src="/luci/homebase/homebase_common.js">
+</script>
+<script type="text/javascript"
+	src="/luci/cluster/validate_fdom.js">
+</script>
+
+<form method="post" action="">
+	<input type="hidden" name="clustername"
+		tal:attributes="value request/clustername | nothing" />
+	<input type="hidden" name="pagetype"
+		tal:attributes="value request/pagetype | nothing" />
+	<input type="hidden" name="oldname"
+		tal:condition="exists: fdom/name"
+		tal:attributes="value fdom/name | nothing" />
+	
+	<table class="systemsTable" width="100%">
+		<thead class="systemsTable">
+			<tr class="systemsTable">
+				<td><strong>Failover Domain Name</strong></td>
+				<td>
+					<input type="text" name="name"
+						tal:attributes="value fdom/name | nothing" />
+				</td>
+			</tr>
+			<tr class="systemsTable">
+				<td>Prioritized</td>
+				<td>
+					<input type="checkbox" name="prioritized" id="prioritized"
+						onchange="fdom_set_prioritized(this.form, this.checked)"
+						tal:attributes="checked python: (fdom and 'prioritized' in fdom and fdom['prioritized'] == '1') and 'checked' or ''" />
+				</td>
+			</tr>
+			<tr class="systemsTable">
+				<td>Restrict failover to this domain's members</td>
+				<td>
+					<input type="checkbox" name="restricted"
+						tal:attributes="checked python: (fdom and 'restricted' in fdom and fdom['restricted'] == '1') and 'checked' or ''" />
+				</td>
+			</tr>
+			<tr class="systemsTable">
+				<td class="systemsTable" colspan="2">
+					<p></p>
+					<p class="reshdr">Failover domain membership</p>
+				</td>
+			</tr>
+		</thead>
+
+		<tfoot class="systemsTable">
+			<tr class="systemsTable"><td>
+				<div class="hbSubmit">
+					<input type="button" name="add" value="Submit"
+						onclick="validate_add_fdom(this.form)" />
+				</div>
+			</td></tr>
+		</tfoot>
+
+		<tbody width="60%">
+			<tr class="systemsTable">
+				<th class="systemsTable" width="33%">Node</th>
+				<th class="systemsTable" width="10%">Member</th>
+				<th class="systemsTable" width="57%">Priority</th>
+			</tr>
+			<tal:block tal:repeat="n python:here.getnodes(modelb)">
+				<tr class="systemsTable">
+					<td class="systemsTable" width="33%">
+						<tal:block tal:replace="n" />
+					</td>
+					<td class="systemsTable" width="10%">
+						<input type="checkbox"
+							onchange="fdom_set_member(this.form, this.name, this.checked)"
+							tal:attributes="
+								checked python: ('members' in fdom and n in fdom['members']) and 'checked' or '';
+								name n" />
+					</td>
+					<td class="systemsTable" width="57%">
+						<input type="text" class="fdom_priority"
+							tal:attributes="
+								id n;
+								name python: '__PRIORITY__' + n;
+								value python: ('members' in fdom and n in fdom['members'] and 'priority' in fdom['members'][n]) and fdom['members'][n]['priority'] or '1';
+								disabled python: (not fdom or not 'prioritized' in fdom or fdom['prioritized'] != '1' or not 'members' in fdom or not n in fdom['members']) and 'disabled' or ''" />
+					</td>
+				</tr>
+			</tal:block>
+		</tbody>
+	</table>
+</form>
+
+</tal:block>
+
 <div metal:define-macro="fdomadd-form">
 	<script type="text/javascript">
 		set_page_title('Luci » cluster » failover domains » Add a failover domain');
 	</script>
-	<h2>Failover Domain Add Form</h2>
-  <tal:block tal:define="allnodes python:here.getFdomNodes(request)"/>
+
+	<h2>Add a Failover Domain</h2>
+	<tal:block metal:use-macro="here/form-macros/macros/fdom-macro" />
 </div>
 
 <div metal:define-macro="fdomconfig-form">
 	<script type="text/javascript">
 		set_page_title('Luci » cluster » failover domains » Configure a failover domain');
 	</script>
-	<h2>Failover Domain Configuration Form</h2>
 </div>
 
 <div metal:define-macro="fdom-form">
 	<h2>Failover Domain Form</h2>
+	<tal:block tal:define="fdom python:here.getFdomInfo(modelb, request)">
+		<tal:block metal:use-macro="here/form-macros/macros/fdom-macro" />
+	</tal:block>
 </div>
 
 <div metal:define-macro="fdomprocess-form">
/cvs/cluster/conga/luci/site/luci/Extensions/FenceXVMd.py,v  -->  standard output
revision 1.1.2.1
--- conga/luci/site/luci/Extensions/FenceXVMd.py
+++ -	2007-02-08 03:46:51.952546000 +0000
@@ -0,0 +1,14 @@
+import string
+from TagObject import TagObject
+
+TAG_NAME = "fence_xvmd"
+
+class FenceXVMd(TagObject):
+  def __init__(self):
+    TagObject.__init__(self)
+    self.TAG_NAME = TAG_NAME
+    #Have autostart set by default
+
+  def getProperties(self):
+    stringbuf = ""
+    return stringbuf
--- conga/luci/site/luci/Extensions/FailoverDomain.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/FailoverDomain.py	2007/02/08 03:46:51	1.1.4.1
@@ -22,7 +22,7 @@
   def getProperties(self):
     stringbuf = ""
     restricted_status = ""
-    ordereded_status = ""
+    ordered_status = ""
     string_restricted = ""
     string_ordered = ""
     string_num_kin = ""
--- conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/07 17:02:18	1.19.2.2
+++ conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/08 03:46:51	1.19.2.3
@@ -627,10 +627,17 @@
       return list()
     else:
       return self.failoverdomains_ptr.getChildren()
-        
+
   def getFailoverDomainPtr(self):
     return self.failoverdomains_ptr
 
+  def getFailoverDomainByName(self, fdom_name):
+    fdoms = self.getFailoverDomains()
+    for i in fdoms:
+      if i.getName() == fdom_name:
+        return i
+    return None
+
   def getFailoverDomainsForNode(self, nodename):
     matches = list()
     faildoms = self.getFailoverDomains()
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/07 21:30:33	1.227.2.3
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/08 03:46:51	1.227.2.4
@@ -11,6 +11,8 @@
 from Ip import Ip
 from Clusterfs import Clusterfs
 from Fs import Fs
+from FailoverDomain import FailoverDomain
+from FailoverDomainNode import FailoverDomainNode
 from RefObject import RefObject
 from ClusterNode import ClusterNode
 from NFSClient import NFSClient
@@ -2009,6 +2011,143 @@
 
 	response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename + '&busyfirst=true')
 
+def validateFdom(self, request):
+	errors = list()
+
+	try:
+		model = request.SESSION.get('model')
+		if not model:
+			raise Exception, 'no model'
+	except Exception, e:
+		luci_log.debug_verbose('validateFdom0: no model: %s' % str(e))
+		return (False, {'errors': [ 'Unable to retrieve cluster information.' ]})
+
+	prioritized = False
+	try:
+		prioritized = request.form.has_key('prioritized')
+	except:
+		prioritized = False
+
+	restricted = False
+	try:
+		restricted = request.form.has_key('restricted')
+	except:
+		restricted = False
+
+	clustername = None
+	try:
+		clustername = request.form['clustername'].strip()
+		if not clustername:
+			raise Exception, 'blank'
+	except:
+		try:
+			clustername = model.getClusterName()
+			if not clustername:
+				raise Exception, 'blank'
+		except:
+			clustername = None
+
+	if not clustername:
+		errors.append('Unable to determine this cluster\'s name.')
+
+	try:
+		name = request.form['name'].strip()
+		if not name:
+			raise Exception, 'blank'
+	except Exception, e:
+		errors.append('No name was given for this failover domain.')
+		luci_log.debug_verbose('validateFdom0: %s' % str(e))
+
+	oldname = None
+	try:
+		oldname = request.form['oldname'].strip()
+		if not oldname:
+			raise Exception, 'blank'
+	except:
+		pass
+
+	if oldname is None or oldname != name:
+		if model.getFailoverDomainByName(name) is not None:
+			errors.append('A failover domain named \"%s\" already exists.' % name)
+
+	fdom = None
+	if oldname is not None:
+		fdom = model.getFailoverDomainByName(oldname)
+		if fdom is None:
+			luci_log.debug_verbose('validateFdom1: No fdom named %s exists' % oldname)
+			errors.append('No failover domain named \"%s\" exists.' % oldname)
+		else:
+			fdom.addAttribute('name', name)
+			fdom.children = list()
+	else:
+		fdom = FailoverDomain()
+		fdom.addAttribute('name', name)
+
+	if fdom is None or len(errors) > 0:
+		return (False, {'errors': errors })
+
+	if prioritized:
+		fdom.addAttribute('ordered', '1')
+	else:
+		fdom.addAttribute('ordered', '0')
+
+	if restricted:
+		fdom.addAttribute('restricted', '1')
+	else:
+		fdom.addAttribute('restricted', '0')
+
+	cluster_nodes = map(lambda x: str(x.getName()), model.getNodes())
+
+	for i in cluster_nodes:
+		if request.form.has_key(i):
+			fdn = FailoverDomainNode()
+			fdn.addAttribute('name', i)
+			if prioritized:
+				priority = 1
+				try:
+					priority = int(request.form['__PRIORITY__' + i].strip())
+					if priority < 1:
+						priority = 1
+				except Exception, e:
+					priority = 1
+				fdn.addAttribute('priority', str(priority))
+			fdom.addChild(fdn)
+
+	try:
+		fdom_ptr = model.getFailoverDomainPtr()
+		if not oldname:
+			fdom_ptr.addChild(fdom)
+		model.setModified(True)
+		conf = str(model.exportModelAsString())
+	except Exception, e:
+		luci_log.debug_verbose('validateFdom2: %s' % str(e))
+		errors.append('Unable to update the cluster configuration.')
+
+	if len(errors) > 0:
+		return (False, {'errors': errors })
+
+	rc = getRicciAgent(self, clustername)
+	if not rc:
+		luci_log.debug_verbose('validateFdom3: unable to find a ricci agent for cluster %s' % clustername)
+		return (False, {'errors': ['Unable to find a ricci agent for the %s cluster' % clustername ]})
+	ragent = rc.hostname()
+
+	batch_number, result = setClusterConf(rc, conf)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('validateFdom4: missing batch and/or result')
+		return (False, {'errors': [ 'An error occurred while constructing the new cluster configuration.' ]})
+
+	try:
+		if oldname:
+			set_node_flag(self, clustername, ragent, str(batch_number), FDOM, 'Updating failover domain \"%s\"' % oldname)
+		else:
+			set_node_flag(self, clustername, ragent, str(batch_number), FDOM_ADD, 'Creating failover domain \"%s\"' % name)
+	except Exception, e:
+		luci_log.debug_verbose('validateFdom5: failed to set flags: %s' % str(e))
+
+	response = request.RESPONSE
+	response.redirect(request['URL'] + "?pagetype=" + FDOM + "&clustername=" + clustername + '&fdomname=' + name + '&busyfirst=true')
+
 def validateVM(self, request):
 	errors = list()
 
@@ -2122,6 +2261,8 @@
 	24: validateServiceAdd,
 	31: validateResourceAdd,
 	33: validateResourceAdd,
+	41: validateFdom,
+	44: validateFdom,
 	51: validateFenceAdd,
 	54: validateFenceEdit,
 	55: validateDaemonProperties,
@@ -2247,12 +2388,11 @@
   return dummynode
 
 def getnodes(self, model):
-  mb = model
-  nodes = mb.getNodes()
-  names = list()
-  for node in nodes:
-    names.append(node.getName())
-  return names
+	try:
+		return map(lambda x: str(x.getName()), model.getNodes())
+	except Exception, e:
+		luci_log.debug_verbose('getnodes0: %s' % str(e))
+	return []
 
 def createCluConfigTree(self, request, model):
   dummynode = {}
@@ -3370,6 +3510,39 @@
 	response = req.RESPONSE
 	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
 
+def getFdomInfo(self, model, request):
+	fhash = {}
+	fhash['members'] = {}
+
+	try:
+		fdom = model.getFailoverDomainByName(request['fdomname'])
+	except Exception, e:
+		luci_log.debug_verbose('getFdomInfo0: %s' % str(e))
+		return fhash
+
+	fhash['name'] = fdom.getName()
+
+	ordered_attr = fdom.getAttribute('ordered')
+	if ordered_attr is not None and (ordered_attr == "true" or ordered_attr == "1"):
+		fhash['prioritized'] = '1'
+	else:
+		fhash['prioritized'] = '0'
+
+	restricted_attr = fdom.getAttribute('restricted')
+	if restricted_attr is not None and (restricted_attr == "true" or restricted_attr == "1"):
+		fhash['restricted'] = '1'
+	else:
+		fhash['restricted'] = '0'
+
+	nodes = fdom.getChildren()
+	for node in nodes:
+		try:
+			priority = node.getAttribute('priority')
+		except:
+			priority = '1'
+		fhash['members'][node.getName()] = { 'priority': priority }
+	return fhash
+
 def getFdomsInfo(self, model, request, clustatus):
   slist = list()
   nlist = list()
@@ -3386,7 +3559,7 @@
   for fdom in fdoms:
     fdom_map = {}
     fdom_map['name'] = fdom.getName()
-    fdom_map['cfgurl'] = baseurl + "?pagetype=" + FDOM_LIST + "&clustername=" + clustername
+    fdom_map['cfgurl'] = baseurl + "?pagetype=" + FDOM + "&clustername=" + clustername + '&fdomname=' + fdom.getName()
     ordered_attr = fdom.getAttribute('ordered')
     restricted_attr = fdom.getAttribute('restricted')
     if ordered_attr is not None and (ordered_attr == "true" or ordered_attr == "1"):




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-02-07 16:55 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-02-07 16:55 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-02-07 16:55:16

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: ModelBuilder.py cluster_adapters.py 
Added files:
	luci/site/luci/Extensions: FenceXVMd.py 

Log message:
	- Support for adding and deleting a fence_xvmd tag from cluster.conf
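
	The new ModelBuilder methods toggle an empty <fence_xvmd/> child of the
	top-level <cluster> tag. A short usage sketch (API names from the
	patch; run_xvmd mirrors the form checkbox):

		# Sketch: toggling the fence_xvmd tag via the new API.
		if run_xvmd and not model.hasFenceXVM():
			model.addFenceXVM(FenceXVMd())  # adds <fence_xvmd/> under <cluster>
		elif not run_xvmd:
			model.delFenceXVM()             # removes the tag if present
		# The exported cluster.conf then contains (or omits) a bare <fence_xvmd/>.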

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.177&r2=1.178
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceXVMd.py.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ModelBuilder.py.diff?cvsroot=cluster&r1=1.20&r2=1.21
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.229&r2=1.230

--- conga/luci/cluster/form-macros	2007/02/05 19:52:44	1.177
+++ conga/luci/cluster/form-macros	2007/02/07 16:55:15	1.178
@@ -943,6 +943,13 @@
 							tal:attributes="value clusterinfo/pjd" />
 					</td>
 				</tr>
+				<tr class="systemsTable">
+					<td class="systemsTable">Run XVM fence daemon</td>
+					<td class="systemsTable">
+						<input type="checkbox" name="run_xvmd"
+							tal:attributes="checked python: ('fence_xvmd' in clusterinfo and clusterinfo['fence_xvmd']) and 'checked' or ''" />
+					</td>
+				</tr>
 			</tbody>
 
 			<tfoot class="systemsTable">
/cvs/cluster/conga/luci/site/luci/Extensions/FenceXVMd.py,v  -->  standard output
revision 1.1
--- conga/luci/site/luci/Extensions/FenceXVMd.py
+++ -	2007-02-07 16:55:16.669210000 +0000
@@ -0,0 +1,14 @@
+import string
+from TagObject import TagObject
+
+TAG_NAME = "fence_xvmd"
+
+class FenceXVMd(TagObject):
+  def __init__(self):
+    TagObject.__init__(self)
+    self.TAG_NAME = TAG_NAME
+    #Have autostart set by default
+
+  def getProperties(self):
+    stringbuf = ""
+    return stringbuf
--- conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/05 19:52:44	1.20
+++ conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/07 16:55:15	1.21
@@ -27,6 +27,7 @@
 from Samba import Samba
 from Multicast import Multicast
 from FenceDaemon import FenceDaemon
+from FenceXVMd import FenceXVMd
 from Netfs import Netfs
 from Clusterfs import Clusterfs
 from Resources import Resources
@@ -56,6 +57,7 @@
            'rm':Rm,
            'service':Service,
            'vm':Vm,
+           'fence_xvmd':FenceXVMd,
            'resources':Resources,
            'failoverdomain':FailoverDomain,
            'failoverdomains':FailoverDomains,
@@ -85,6 +87,7 @@
 FENCEDAEMON_PTR_STR="fence_daemon"
 SERVICE="service"
 VM="vm"
+FENCE_XVMD_STR="fence_xvmd"
 GULM_TAG_STR="gulm"
 MCAST_STR="multicast"
 CMAN_PTR_STR="cman"
@@ -119,6 +122,7 @@
     self.isModified = False
     self.quorumd_ptr = None
     self.usesQuorumd = False
+    self.fence_xvmd_ptr = None
     self.unusual_items = list()
     self.isVirtualized = False
     if mcast_addr == None:
@@ -217,6 +221,8 @@
         self.CMAN_ptr = new_object
       elif parent_node.nodeName == MCAST_STR:
         self.usesMulticast = True
+      elif parent_node.nodeName == FENCE_XVMD_STR:
+        self.fence_xvmd_ptr = new_object
 
     else:
       return None
@@ -591,6 +597,22 @@
 
     raise GeneralError('FATAL',"Couldn't find VM name %s in current list" % name)
 
+  def hasFenceXVM(self):
+    return self.fence_xvmd_ptr is not None
+
+  # Right now the fence_xvmd tag is empty, but allow the object
+  # to be passed in case attributes are added in the future.
+  def addFenceXVM(self, obj):
+    if self.fence_xvmd_ptr is not None:
+      self.cluster_ptr.removeChild(self.fence_xvmd_ptr)
+    self.cluster_ptr.addChild(obj)
+    self.fence_xvmd_ptr = obj
+
+  def delFenceXVM(self):
+    if self.fence_xvmd_ptr is not None:
+      self.cluster_ptr.removeChild(self.fence_xvmd_ptr)
+      self.fence_xvmd_ptr = None
+
   def getFenceDevices(self):
     if self.fencedevices_ptr == None:
       return list()
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/05 19:56:18	1.229
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/07 16:55:15	1.230
@@ -24,6 +24,7 @@
 from Tomcat5 import Tomcat5
 from OpenLDAP import OpenLDAP
 from Vm import Vm
+from FenceXVMd import FenceXVMd
 from Script import Script
 from Samba import Samba
 from QuorumD import QuorumD
@@ -1172,6 +1173,18 @@
 	except ValueError, e:
 		errors.append('Invalid post join delay: %s' % str(e))
 
+	run_xvmd = False
+	try:
+		run_xvmd = form.has_key('run_xvmd')
+	except:
+		pass
+
+	if run_xvmd is True and not model.hasFenceXVM():
+		fenceXVMd = FenceXVMd()
+		model.addFenceXVM(fenceXVMd)
+	elif not run_xvmd:
+		model.delFenceXVM()
+
 	try:
 		fd = model.getFenceDaemonPtr()
 		old_pj_delay = fd.getPostJoinDelay()
@@ -3513,6 +3526,7 @@
   #new cluster params - if rhel5
   #-------------
 
+  clumap['fence_xvmd'] = model.hasFenceXVM()
   gulm_ptr = model.getGULMPtr()
   if not gulm_ptr:
     #Fence Daemon Props




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-02-02  4:34 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-02-02  4:34 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-02-02 04:34:35

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	Fix the display of virtual services
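
	The fix distinguishes plain services from virtual machine services by
	looking the name up as a service first and falling back to the VM list,
	tagging VM entries so the template can label them and link to the VM
	configuration page. A condensed sketch of that fallback (method and
	constant names from the patch):

		# Condensed sketch of the service-vs-VM lookup fallback.
		try:
			svc = model.retrieveServiceByName(name)
			pagetype = SERVICE
			is_virt = False
		except:
			try:
				svc = model.retrieveVMsByName(name)  # raises GeneralError if absent
				pagetype = VM_CONFIG
				is_virt = True  # template then appends "(virtual service)"
			except:
				svc = None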

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.175&r2=1.176
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.226&r2=1.227

--- conga/luci/cluster/form-macros	2007/02/02 01:12:21	1.175
+++ conga/luci/cluster/form-macros	2007/02/02 04:34:35	1.176
@@ -3730,6 +3730,9 @@
 						href svc/cfgurl;
 						class python: 'cluster service ' + (running and 'running' or 'stopped')"
 						tal:content="svc/name" />
+					<tal:block tal:condition="exists:svc/virt">
+						(virtual service)
+					</tal:block>
 				</td>
 
 				<td class="cluster service service_action">
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/02 00:11:05	1.226
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/02 04:34:35	1.227
@@ -2387,7 +2387,6 @@
     svcfg['currentItem'] = False
 
   services = model.getServices()
-  vms = model.getVMs()
   serviceable = list()
   for service in services:
     servicename = service.getName()
@@ -2410,6 +2409,7 @@
 
     serviceable.append(svc)
 
+  vms = model.getVMs()
   for vm in vms:
     name = vm.getName()
     svc = {}
@@ -3038,6 +3038,10 @@
 			vals['name'] = node.getAttribute('name')
 			vals['nodename'] = node.getAttribute('nodename')
 			vals['running'] = node.getAttribute('running')
+			try:
+				vals['is_vm'] = node.getAttribute('vm').lower() == 'true'
+			except:
+				vals['is_vm'] = False
 			vals['failed'] = node.getAttribute('failed')
 			vals['autostart'] = node.getAttribute('autostart')
 			results.append(vals)
@@ -3074,13 +3078,19 @@
 				itemmap['running'] = "true"
 				itemmap['nodename'] = item['nodename']
 			itemmap['autostart'] = item['autostart']
-			itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE
-			itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE_DELETE
 
 			try:
 				svc = model.retrieveServiceByName(item['name'])
+				itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE
+				itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE_DELETE
 			except:
-				continue
+				try:
+					svc = model.retrieveVMsByName(item['name'])
+					itemmap['virt'] = True
+					itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + VM_CONFIG 
+					itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + VM_CONFIG
+				except:
+					continue
 			dom = svc.getAttribute("domain")
 			if dom is not None:
 				itemmap['faildom'] = dom
@@ -3667,7 +3677,12 @@
       svcname = svc['name']
       svc_dict['name'] = svcname
       svc_dict['srunning'] = svc['running']
-      svcurl = baseurl + "?" + PAGETYPE + "=" + SERVICE + "&" + CLUNAME + "=" + clustername + "&servicename=" + svcname
+
+      if svc.has_key('is_vm') and svc['is_vm'] is True:
+        target_page = VM_CONFIG
+      else:
+        target_page = SERVICE
+      svcurl = baseurl + "?" + PAGETYPE + "=" + target_page + "&" + CLUNAME + "=" + clustername + "&servicename=" + svcname
       svc_dict['servicename'] = svcname
       svc_dict['svcurl'] = svcurl
       svc_dict_list.append(svc_dict)




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-02-02  0:11 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-02-02  0:11 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-02-02 00:11:05

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	Disallow multicast configuration options for GULM clusters
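
	GULM-based (RHEL4) clusters do not use CMAN's multicast configuration,
	so the change works at two layers: the validator refuses multicast
	parameters outright, and the property map only exposes the Multicast
	tab for non-GULM clusters. A condensed sketch of the reworked clumap
	branches (key names from the patch; the URL variables stand in for the
	computed tab URLs):

		# Sketch of the reworked clumap branches.
		if not model.getGULMPtr():
			clumap['gulm'] = False
			clumap['multicast_url'] = multicast_url  # Multicast tab shown
			clumap['is_mcast'] = model.isMulticast() and "True" or "False"
		else:
			clumap['gulm'] = True                    # GULM tab shown instead
			clumap['gulm_url'] = gulm_url
			# template hides Multicast via tal:condition="not:clusterinfo/gulm"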

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.172&r2=1.173
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.225&r2=1.226

--- conga/luci/cluster/form-macros	2007/02/01 23:48:51	1.172
+++ conga/luci/cluster/form-macros	2007/02/02 00:11:05	1.173
@@ -550,18 +550,13 @@
 				class python: 'configTab' + (configTabNum == 2 and ' configTabActive' or '');
 			">Fence</a>
 		</li>
-		<li class="configTab">
+		<li class="configTab"
+			tal:condition="not:clusterinfo/gulm">
 			<a tal:attributes="
 				href clusterinfo/multicast_url | nothing;
 				class python: 'configTab' + (configTabNum == 3 and ' configTabActive' or '');
 			">Multicast</a>
 		</li>
-		<li class="configTab">
-			<a tal:attributes="
-				href clusterinfo/quorumd_url | nothing;
-				class python: 'configTab' + (configTabNum == 4 and ' configTabActive' or '');
-			">Quorum Partition</a>
-		</li>
 
 		<li class="configTab"
 			tal:condition="clusterinfo/gulm">
@@ -569,6 +564,13 @@
 				href clusterinfo/gulm_url | nothing;
 				class python: 'configTab' + (configTabNum == 5 and ' configTabActive' or '')">GULM</a>
 		</li>
+
+		<li class="configTab">
+			<a tal:attributes="
+				href clusterinfo/quorumd_url | nothing;
+				class python: 'configTab' + (configTabNum == 4 and ' configTabActive' or '');
+			">Quorum Partition</a>
+		</li>
 	</ul>
 
 	<div id="configTabContent" tal:condition="python: configTabNum == 1">
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/01 23:48:51	1.225
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/02 00:11:05	1.226
@@ -906,6 +906,13 @@
 
 # rhel5 cluster version
 def validateMCastConfig(model, form):
+	try:
+		gulm_ptr = model.getGULMPtr()
+		if gulm_ptr:
+			return (False, {'errors': ['Multicast cannot be used with GULM locking.']})
+	except:
+		pass
+
 	errors = list()
 	try:
 		mcast_val = form['mcast'].strip().lower()
@@ -3506,24 +3513,25 @@
   clumap['pjd'] = pjd
   #post fail delay
   clumap['pfd'] = pfd
-  #-------------
-  #if multicast
-  multicast_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_MCAST_TAB
-  clumap['multicast_url'] = multicast_url
-  #mcast addr
-  is_mcast = model.isMulticast()
-  #clumap['is_mcast'] = is_mcast
-  if is_mcast:
-    clumap['mcast_addr'] = model.getMcastAddr()
-    clumap['is_mcast'] = "True"
-  else:
-    clumap['is_mcast'] = "False"
-    clumap['mcast_addr'] = "1.2.3.4"
 
-  #-------------
-  #GULM params (rhel4 only)
   gulm_ptr = model.getGULMPtr()
-  if gulm_ptr:
+  if not gulm_ptr:
+    #-------------
+    #if multicast
+    multicast_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_MCAST_TAB
+    clumap['multicast_url'] = multicast_url
+    #mcast addr
+    is_mcast = model.isMulticast()
+    if is_mcast:
+      clumap['mcast_addr'] = model.getMcastAddr()
+      clumap['is_mcast'] = "True"
+    else:
+      clumap['is_mcast'] = "False"
+      clumap['mcast_addr'] = "1.2.3.4"
+    clumap['gulm'] = False
+  else:
+    #-------------
+    #GULM params (rhel4 only)
     lockserv_list = list()
     clunodes = model.getNodes()
     gulm_lockservs = map(lambda x: x.getName(), gulm_ptr.getChildren())
@@ -3535,8 +3543,6 @@
     clumap['gulm'] = True
     clumap['gulm_url'] = prop_baseurl + PROPERTIES_TAB + '=' + PROP_GULM_TAB
     clumap['gulm_lockservers'] = lockserv_list
-  else:
-    clumap['gulm'] = False
 
   #-------------
   #quorum disk params




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-02-01 20:49 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-02-01 20:49 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-02-01 20:49:08

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: ModelBuilder.py cluster_adapters.py 
	                           conga_constants.py 

Log message:
	- implement deletion of a virtual machine service
	- don't increment the cluster configuration number needlessly
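
	The adapter folds create, update, and delete into one handler keyed off
	the form: a 'delete' key removes the Vm child from the resource manager
	section, otherwise the VM service is created or its name/path
	attributes updated. The dropped incrementConfigVersion() calls suggest
	the version bump now happens once in the export path rather than at
	every call site. A condensed sketch of the new branching (names from
	the patch):

		# Condensed sketch of the create/update/delete branching.
		rmptr = model.getResourceManagerPtr()
		if request.form.has_key('delete'):
			rmptr.removeChild(model.retrieveVMsByName(old_name))
		elif isNew:
			xvm = Vm()
			xvm.addAttribute('name', vm_name)
			xvm.addAttribute('path', vm_path)
			rmptr.addChild(xvm)
		else:
			xvm = model.retrieveVMsByName(old_name)
			xvm.addAttribute('name', vm_name)
			xvm.addAttribute('path', vm_path)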

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.170&r2=1.171
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ModelBuilder.py.diff?cvsroot=cluster&r1=1.18&r2=1.19
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.223&r2=1.224
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&r1=1.35&r2=1.36

--- conga/luci/cluster/form-macros	2007/02/01 20:27:33	1.170
+++ conga/luci/cluster/form-macros	2007/02/01 20:49:08	1.171
@@ -3748,7 +3748,7 @@
 		<tfoot class="systemsTable">
 			<tr class="systemsTable"><td colspan="2">
 				<div class="hbSubmit">
-					<input type="submit" value="Create Virtual Machine" />
+					<input type="submit" value="Create Virtual Machine Service" />
 				</div>
 			</td></tr>
 		</tfoot>
@@ -3784,13 +3784,13 @@
 	<table class="systemsTable">
 		<thead class="systemsTable">
 			<tr class="systemsTable"><td class="systemsTable">
-				<p class="reshdr">Properties for <tal:block tal:replace="vminfo/name | string:virtual machine"/></p>
+				<p class="reshdr">Properties for <tal:block tal:replace="vminfo/name | string:virtual machine service"/></p>
 			</td></tr>
 		<tfoot class="systemsTable">
 			<tr class="systemsTable"><td colspan="2">
 				<div class="hbSubmit">
-					<input name="submit" type="submit" value="Update Virtual Machine" />
-					<input name="delete" type="submit" value="Delete Virtual Machine" />
+					<input name="submit" type="submit" value="Update Virtual Machine Service" />
+					<input name="delete" type="submit" value="Delete Virtual Machine Service" />
 				</div>
 			</td></tr>
 		</tfoot>
--- conga/luci/site/luci/Extensions/ModelBuilder.py	2007/01/24 19:45:44	1.18
+++ conga/luci/site/luci/Extensions/ModelBuilder.py	2007/02/01 20:49:08	1.19
@@ -585,13 +585,13 @@
 
     raise GeneralError('FATAL',"Couldn't find service name in current list")
 
-  def retrieveXenVMsByName(self, name):
-    vms = self.getXENVMs()
+  def retrieveVMsByName(self, name):
+    vms = self.getVMs()
     for v in vms:
       if v.getName() == name:
         return v
 
-    raise GeneralError('FATAL',"Couldn't find xen vm name %s in current node list" % name)
+    raise GeneralError('FATAL',"Couldn't find VM name %s in current list" % name)
 
   def getFenceDevices(self):
     if self.fencedevices_ptr == None:
@@ -724,7 +724,7 @@
 
     return None
         
-  def getXENVMs(self):
+  def getVMs(self):
     rg_list = list()
     if self.resourcemanager_ptr != None:
       kids = self.resourcemanager_ptr.getChildren()
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/31 23:45:09	1.223
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/02/01 20:49:08	1.224
@@ -590,8 +590,6 @@
 			request.SESSION.set('add_node', add_cluster)
 			return (False, { 'errors': errors, 'messages': messages })
 
-		cp = model.getClusterPtr()
-		cp.incrementConfigVersion()
 		model.setModified(True)
 		conf_str = str(model.exportModelAsString())
 		if not conf_str:
@@ -838,8 +836,6 @@
 		return (False, {'errors': [ 'Unable to determine cluster name' ]})
 
 	try:
-		cp = model.getClusterPtr()
-		cp.incrementConfigVersion()
 		model.setModified(True)
 		conf = model.exportModelAsString()
 		if not conf:
@@ -1402,7 +1398,7 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-          CLUSTER_CONFIG, 'Adding new fence device \"%s\"' % retobj)
+            CLUSTER_CONFIG, 'Adding new fence device \"%s\"' % retobj)
         except:
           pass
 
@@ -1494,7 +1490,7 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-          CLUSTER_CONFIG, 'Updating fence device \"%s\"' % retobj)
+            CLUSTER_CONFIG, 'Updating fence device \"%s\"' % retobj)
         except:
           pass
 
@@ -1852,8 +1848,6 @@
   if error_code == FD_VAL_SUCCESS:
     messages.append(error_string)
     try:
-      cp = model.getClusterPtr()
-      cp.incrementConfigVersion()
       model.setModified(True)
       conf_str = model.exportModelAsString()
       if not conf_str:
@@ -1887,7 +1881,7 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-          CLUSTER_CONFIG, 'Updating cluster configuration')
+            CLUSTER_CONFIG, 'Removing fence device \"%s\"' % fencedev_name)
         except:
           pass
 
@@ -2012,21 +2006,33 @@
 	except KeyError, e:
 		isNew = True
 
-	if isNew is True:
-		xvm = Vm()
-		xvm.addAttribute('name', vm_name)
-		xvm.addAttribute('path', vm_path)
-		rmptr = model.getResourceManagerPtr()
-		rmptr.addChild(xvm)
-	else:
+	delete_vm = False
+	if request.form.has_key('delete'):
 		try:
-			xvm = model.retrieveXenVMsByName(old_name)
+			xvm = model.retrieveVMsByName(old_name)
 			if not xvm:
 				raise Exception, 'not found'
+			rmptr = model.getResourceManagerPtr()
+			rmptr.removeChild(xvm)
+			delete_vm = True
 		except:
-			return (False, {'errors': ['No virtual machine named \"%s\" exists.' % old_name ]})
-		xvm.addAttribute('name', vm_name)
-		xvm.addAttribute('path', vm_path)
+			return (False, {'errors': ['No virtual machine service named \"%s\" exists.' % old_name ]})
+	else:
+		if isNew is True:
+			xvm = Vm()
+			xvm.addAttribute('name', vm_name)
+			xvm.addAttribute('path', vm_path)
+			rmptr = model.getResourceManagerPtr()
+			rmptr.addChild(xvm)
+		else:
+			try:
+				xvm = model.retrieveVMsByName(old_name)
+				if not xvm:
+					raise Exception, 'not found'
+			except:
+				return (False, {'errors': ['No virtual machine service named \"%s\" exists.' % old_name ]})
+			xvm.addAttribute('name', vm_name)
+			xvm.addAttribute('path', vm_path)
 
 	try:
 		model.setModified(True)
@@ -2059,10 +2065,12 @@
 		return (False, {'errors': [ 'Error creating virtual machine %s.' % vm_name ]})
 
 	try:
-		if isNew is True:
-			set_node_flag(self, clustername, rc.hostname(), str(batch_number), XENVM_ADD, "Creating virtual machine \'%s\'" % vm_name)
+		if delete_vm is True:
+			set_node_flag(self, clustername, rc.hostname(), str(batch_number), VM_CONFIG, "Deleting virtual machine service \'%s\'" % vm_name)
+		elif isNew is True:
+			set_node_flag(self, clustername, rc.hostname(), str(batch_number), VM_ADD, "Creating virtual machine service \'%s\'" % vm_name)
 		else:
-			set_node_flag(self, clustername, rc.hostname(), str(batch_number), XENVM_CONFIG, "Configuring virtual machine \'%s\'" % vm_name)
+			set_node_flag(self, clustername, rc.hostname(), str(batch_number), VM_CONFIG, "Configuring virtual machine service \'%s\'" % vm_name)
 	except Exception, e:
 		luci_log.debug_verbose('validateVM6: failed to set flags: %s' % str(e))
 
@@ -2337,10 +2345,10 @@
   if model.getIsVirtualized() == True:
     vmadd = {}
     vmadd['Title'] = "Add a Virtual Service"
-    vmadd['cfg_type'] = "xenvmadd"
-    vmadd['absolute_url'] = url + "?pagetype=" + XENVM_ADD + "&clustername=" + cluname
+    vmadd['cfg_type'] = "vmadd"
+    vmadd['absolute_url'] = url + "?pagetype=" + VM_ADD + "&clustername=" + cluname
     vmadd['Description'] = "Add a Virtual Service to this cluster"
-    if pagetype == XENVM_ADD:
+    if pagetype == VM_ADD:
       vmadd['currentItem'] = True
     else:
       vmadd['currentItem'] = False
@@ -2360,7 +2368,7 @@
     svcfg['currentItem'] = False
 
   services = model.getServices()
-  xenvms = model.getXENVMs()
+  vms = model.getVMs()
   serviceable = list()
   for service in services:
     servicename = service.getName()
@@ -2383,19 +2391,19 @@
 
     serviceable.append(svc)
 
-  for xenvm in xenvms:
-    xenname = xenvm.getName()
+  for vm in vms:
+    name = vm.getName()
     svc = {}
-    svc['Title'] = xenname
-    svc['cfg_type'] = "xenvm"
-    svc['absolute_url'] = url + "?pagetype=" + XENVM_CONFIG + "&servicename=" + xenname + "&clustername=" + cluname
+    svc['Title'] = name
+    svc['cfg_type'] = "vm"
+    svc['absolute_url'] = url + "?pagetype=" + VM_CONFIG + "&servicename=" + name + "&clustername=" + cluname
     svc['Description'] = "Configure this Virtual Service"
-    if pagetype == XENVM_CONFIG:
+    if pagetype == VM_CONFIG:
       try:
         xname = request['servicename']
       except KeyError, e:
         xname = ""
-      if xenname == xname:
+      if name == xname:
         svc['currentItem'] = True
       else:
         svc['currentItem'] = False
@@ -3996,8 +4004,6 @@
 				% (delete_target.getName(), str(e)))
 
 		try:
-			cp = model.getClusterPtr()
-			cp.incrementConfigVersion()
 			model.setModified(True)
 			str_buf = model.exportModelAsString()
 			if not str_buf:
@@ -4823,14 +4829,12 @@
 
 	return getNodeLogs(rc)
 
-def processXenVM(self, req):
-	pass
-
-def getXenVMInfo(self, model, request):
+def getVMInfo(self, model, request):
   map = {}
   baseurl = request['URL']
   clustername = request['clustername']
   svcname = None
+
   try:
     svcname = request['servicename']
   except KeyError, e:
@@ -4842,22 +4846,22 @@
   map['formurl'] = urlstring
 
   try:
-    xenvmname = request['servicename']
+    vmname = request['servicename']
   except:
     try:
-      xenvmname = request.form['servicename']
+      vmname = request.form['servicename']
     except:
       luci_log.debug_verbose('servicename is missing from request')
       return map
 
   try:
-    xenvm = model.retrieveXenVMsByName(xenvmname)
+    vm = model.retrieveVMsByName(vmname)
   except:
     luci_log.debug('An error occurred while attempting to get VM %s' \
-    % xenvmname)
+      % vmname)
     return map
 
-  attrs= xenvm.getAttributes()
+  attrs= vm.getAttributes()
   keys = attrs.keys()
   for key in keys:
     map[key] = attrs[key]
@@ -5305,8 +5309,6 @@
 		return (False, {'errors': [ '%s: error removing service %s.' % (errstr, name) ]})
 
 	try:
-		cp = model.getClusterPtr()
-		cp.incrementConfigVersion()
 		model.setModified(True)
 		conf = model.exportModelAsString()
 		if not conf:
@@ -5387,8 +5389,6 @@
 		return errstr + ': the specified resource was not found.'
 
 	try:
-		cp = model.getClusterPtr()
-		cp.incrementConfigVersion()
 		model.setModified(True)
 		conf = model.exportModelAsString()
 		if not conf:
@@ -6702,8 +6702,6 @@
 		return 'Unable to add the new resource'
 
 	try:
-		cp = model.getClusterPtr()
-		cp.incrementConfigVersion()
 		model.setModified(True)
 		conf = model.exportModelAsString()
 		if not conf:
--- conga/luci/site/luci/Extensions/conga_constants.py	2007/01/23 13:53:36	1.35
+++ conga/luci/site/luci/Extensions/conga_constants.py	2007/02/01 20:49:08	1.36
@@ -13,8 +13,8 @@
 NODE_ADD="15"
 NODE_PROCESS="16"
 NODE_LOGS="17"
-XENVM_ADD="18"
-XENVM_CONFIG="19"
+VM_ADD="18"
+VM_CONFIG="19"
 SERVICES="20"
 SERVICE_ADD="21"
 SERVICE_LIST="22"
@@ -24,7 +24,7 @@
 SERVICE_START="26"
 SERVICE_STOP="27"
 SERVICE_RESTART="28"
-XENVM_PROCESS="29"
+VM_PROCESS="29"
 RESOURCES="30"
 RESOURCE_ADD="31"
 RESOURCE_LIST="32"
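
For reference, the page-type constants renamed above are plain strings keyed (as integers) into the formValidators table, so the add (18) and configure (19) pages share one validator. A minimal sketch of that dispatch, with simplified validation logic and illustrative form contents rather than the real luci code:

VM_ADD = "18"
VM_CONFIG = "19"

def validate_vm(form):
    # an 'oldname' field would distinguish an edit from a create;
    # only the two required fields are checked here
    errors = []
    if not form.get('vmname', '').strip():
        errors.append('No virtual machine name was given.')
    if not form.get('vmpath', '').strip():
        errors.append('No path to the virtual machine configuration file was given.')
    return (len(errors) == 0, {'errors': errors})

form_validators = {int(VM_ADD): validate_vm, int(VM_CONFIG): validate_vm}

ok, result = form_validators[int("18")]({'vmname': 'guest1',
                                         'vmpath': '/etc/xen/guest1'})
# ok is True; pages 18 and 19 both funnel into the same validator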




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-01-31 23:36 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-01-31 23:36 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-01-31 23:36:26

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	fix vm service code

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.168&r2=1.169
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.221&r2=1.222

--- conga/luci/cluster/form-macros	2007/01/31 05:26:44	1.168
+++ conga/luci/cluster/form-macros	2007/01/31 23:36:26	1.169
@@ -3735,31 +3735,93 @@
 </div>
 
 <div metal:define-macro="xenvmadd-form">
-  <span tal:define="global vmforminfo python: here.getXenVMInfo(modelb, request)"/>
-  <form method="get" tal:attributes="action vmforminfo/formurl">
-  <h4>Name for this VM: </h4><input type="text" name="xenvmname" value=""/>
-  <h4>Path to configuration file: </h4><input type="text" name="xenvmpath" value=""/>
-  <input type="submit" value="Create Xen VM"/>
-  </form>
+<form method="post" action="">
+	<input type="hidden" name="clustername"
+		tal:attributes="value request/clustername | nothing" />
+
+	<input type="hidden" name="pagetype"
+		tal:attributes="value request/pagetype | nothing" />
+
+	<div class="service_comp_list">
+	<table class="systemsTable">
+		<thead class="systemsTable">
+			<tr class="systemsTable"><td class="systemsTable">
+				<p class="reshdr">Create a Virtual Machine Service</p>
+			</td></tr>
+		<tfoot class="systemsTable">
+			<tr class="systemsTable"><td colspan="2">
+				<div class="hbSubmit">
+					<input type="submit" value="Create Virtual Machine" />
+				</div>
+			</td></tr>
+		</tfoot>
+		<tbody class="systemsTable">
+			<tr class="systemsTable">
+				<td>Virtual machine name</td>
+				<td><input type="text" name="vmname" value="" /></td>
+			</tr>
+			<tr class="systemsTable">
+				<td>Path to VM configuration file</td>
+				<td><input type="text" name="vmpath" value="" /></td>
+			</tr>
+		</tbody>
+	</table>
+	</div>
+</form>
 </div>
 
 <div metal:define-macro="xenvmconfig-form">
-  <h4>Properties for Xen VM <font color="green"><span tal:content="request/servicename"/></font></h4>
-  <span tal:define="global xeninfo python:here.getXenVMInfo(modelb, request)">
-  <form method="get" tal:attributes="action xeninfo/formurl">
-  <h4>Name of VM: </h4><input type="text" name="xenvmname" value="" tal:attributes="value xeninfo/name"/>
-  <h4>Path to configuration file: </h4><input type="text" name="xenvmpath" value="" tal:attributes="value xeninfo/path"/>
-  <input type="button" value="Delete"/>
-  <input type="submit" value="Update"/>
-  </form>
- </span>
+<form method="post" action=""
+	tal:define="vminfo python:here.getXenVMInfo(modelb, request)">
+
+	<input type="hidden" name="clustername"
+		tal:attributes="value request/clustername | nothing" />
+
+	<input type="hidden" name="pagetype"
+		tal:attributes="value request/pagetype | nothing" />
+
+	<input type="hidden" name="oldname"
+		tal:attributes="value vminfo/name | nothing" />
+
+	<div class="service_comp_list">
+	<table class="systemsTable">
+		<thead class="systemsTable">
+			<tr class="systemsTable"><td class="systemsTable">
+				<p class="reshdr">Properties for <tal:block tal:replace="vminfo/name | string:virtual machine"/></p>
+			</td></tr>
+		<tfoot class="systemsTable">
+			<tr class="systemsTable"><td colspan="2">
+				<div class="hbSubmit">
+					<input name="submit" type="submit" value="Update Virtual Machine" />
+					<input name="delete" type="submit" value="Delete Virtual Machine" />
+				</div>
+			</td></tr>
+		</tfoot>
+		<tbody class="systemsTable">
+			<tr class="systemsTable">
+				<td>Virtual machine name</td>
+				<td>
+					<input type="text" name="vmname"
+						tal:attributes="value vminfo/name | nothing" />
+				</td>
+			</tr>
+			<tr class="systemsTable">
+				<td>Path to VM configuration file</td>
+				<td>
+					<input type="text" name="vmpath"
+						tal:attributes="value vminfo/path | nothing" />
+				</td>
+			</tr>
+		</tbody>
+	</table>
+	</div>
+</form>
 </div>
 
 <div metal:define-macro="xenvmprocess">
 	<span tal:define="retrn python:here.processXenVM(request)"/>
 </div>
 
-
 <div metal:define-macro="serviceadd-form">
 	<script type="text/javascript">
		set_page_title('Luci — cluster — services — Add a new service');
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/31 19:28:08	1.221
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/31 23:36:26	1.222
@@ -1980,10 +1980,101 @@
 
 	response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename + '&busyfirst=true')
 
+def validateVM(self, request):
+	errors = list()
+
+	model = request.SESSION.get('model')
+
+	try:
+		vm_name = request.form['vmname'].strip()
+		if not vm_name:
+			raise Exception, 'blank'
+	except Exception, e:
+		luci_log.debug_verbose('validateVM0: no vm name: %s' % str(e))
+		errors.append('No virtual machine name was given.')
+
+	try:
+		vm_path = request.form['vmpath'].strip()
+		if not vm_path:
+			raise 'blank'
+	except Exception, e:
+		luci_log.debug_verbose('validateVM1: no vm path: %s' % str(e))
+		errors.append('No path to the virtual machine configuration file was given.')
+
+	if len(errors) > 0:
+		return (False, {'errors': errors })
+
+	isNew = False
+	try:
+		old_name = request.form['oldname'].strip()
+		if not old_name:
+			raise KeyError, 'oldname'
+	except KeyError, e:
+		isNew = True
+
+	if isNew is True:
+		xvm = Vm()
+		xvm.addAttribute('name', vm_name)
+		xvm.addAttribute('path', vm_path)
+		rmptr = model.getResourceManagerPtr()
+		rmptr.addChild(xvm)
+	else:
+		try:
+			xvm = model.retrieveXenVMsByName(old_name)
+			if not xvm:
+				raise Exception, 'not found'
+		except:
+			return (False, {'errors': ['No virtual machine named \"%s\" exists.' % old_name ]})
+		xvm.addAttribute('name', vm_name)
+		xvm.addAttribute('path', vm_path)
+
+	try:
+		model.setModified(True)
+		stringbuf = str(model.exportModelAsString())
+		if not stringbuf:
+			raise Exception, 'model is blank'
+	except Exception, e:
+		luci_log.debug_verbose('validateVM2: %s' % str(e))
+		errors.append('Unable to update the cluster model')
+
+	try:
+		clustername = model.getClusterName()
+		if not clustername:
+			raise Exception, 'cluster name from model.getClusterName() is blank'
+	except Exception, e:
+		luci_log.debug_verbose('validateVM3: %s' % str(e))
+		errors.append('Unable to determine the cluster name.')
+
+	if len(errors) > 0:
+		return (False, {'errors': errors })
+
+	rc = getRicciAgent(self, clustername)
+	if not rc:
+		luci_log.debug_verbose('validateVM4: no ricci for %s' % clustername)
+		return (False, {'errors': ['Unable to contact a ricci agent for this cluster.']})
+
+	batch_number, result = setClusterConf(rc, stringbuf)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('validateVM5: missing batch and/or result')
+		return (False, {'errors': [ 'Error creating virtual machine %s.' % vm_name ]})
+
+	try:
+		if isNew is True:
+			set_node_flag(self, clustername, rc.hostname(), str(batch_number), XENVM_ADD, "Creating virtual machine \'%s\'" % vm_name)
+		else:
+			set_node_flag(self, clustername, rc.hostname(), str(batch_number), XENVM_CONFIG, "Configuring virtual machine \'%s\'" % vm_name)
+	except Exception, e:
+		luci_log.debug_verbose('validateVM6: failed to set flags: %s' % str(e))
+
+	response = request.RESPONSE
+	response.redirect(request['URL'] + "?pagetype=" + SERVICES + "&clustername=" + clustername + '&busyfirst=true')
+
 formValidators = {
 	6: validateCreateCluster,
 	7: validateConfigCluster,
 	15: validateAddClusterNode,
+	18: validateVM,
+	19: validateVM,
 	21: validateServiceAdd,
 	24: validateServiceAdd,
 	31: validateResourceAdd,
@@ -2959,7 +3050,10 @@
 			itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE
 			itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE_DELETE
 
-			svc = model.retrieveServiceByName(item['name'])
+			try:
+				svc = model.retrieveServiceByName(item['name'])
+			except:
+				continue
 			dom = svc.getAttribute("domain")
 			if dom is not None:
 				itemmap['faildom'] = dom
@@ -4727,49 +4821,7 @@
 	return getNodeLogs(rc)
 
 def processXenVM(self, req):
-  model = req.SESSION.get('model')
-  isNew = False
-  try:
-    xenvmname = req['servicename']
-  except KeyError, e:
-    isNew = True
-
-  if isNew == True:
-    xvm = Vm()
-    xvm.addAttribute("name", req.form['xenvmname'])
-    xvm.addAttribute("path", req.form['xenvmpath'])
-    rmptr = model.getResourceManagerPtr()
-    rmptr.addChild(xvm)
-  else:
-    xvm = model.retrieveXenVMsByName(self, xenvmname)
-    xvm.addAttribute("name", req.form['xenvmname'])
-    xvm.addAttribute("path", req.form['xenvmpath'])
-
-  try:
-    cp = model.getClusterPtr()
-    cp.incrementConfigVersion()
-    model.setModified(True)
-    stringbuf = model.exportModelAsString()
-    if not stringbuf:
-      raise Exception, 'model is blank'
-  except Exception, e:
-    luci_log.debug_verbose('exportModelAsString error: %s' % str(e))
-    return None
-
-  try:
-    clustername = model.getClusterName()
-    if not clustername:
-      raise Exception, 'cluster name from model.getClusterName() is blank'
-  except Exception, e:
-    luci_log.debug_verbose('error: getClusterName: %s' % str(e))
-    return None
-
-  rc = getRicciAgent(self, clustername)
-  if not rc:
-    luci_log.debug_verbose('Unable to find a ricci agent for the %s cluster' % clustername)
-    return None
-
-  setClusterConf(rc, stringbuf)
+	pass
 
 def getXenVMInfo(self, model, request):
   map = {}
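
The net effect of the new validateVM above is one extra child under the resource manager block of cluster.conf. A rough illustration of the fragment it builds, using minidom directly; the real code goes through the Vm and ResourceManager model classes, and the element and attribute names are taken from the addAttribute calls in the patch:

from xml.dom.minidom import Document

doc = Document()
rm = doc.createElement('rm')
vm = doc.createElement('vm')
vm.setAttribute('name', 'guest1')           # the vmname form field
vm.setAttribute('path', '/etc/xen/guest1')  # the vmpath form field
rm.appendChild(vm)
doc.appendChild(rm)

print(rm.toxml())  # <rm><vm name="guest1" path="/etc/xen/guest1"/></rm>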




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-01-31  5:26 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-01-31  5:26 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-01-31 05:26:45

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py ricci_bridge.py 

Log message:
	GULM cluster deployment

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.167&r2=1.168
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.218&r2=1.219
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&r1=1.54&r2=1.55

--- conga/luci/cluster/form-macros	2007/01/30 22:26:00	1.167
+++ conga/luci/cluster/form-macros	2007/01/31 05:26:44	1.168
@@ -309,7 +309,8 @@
 									tal:attributes="checked python: add_cluster and 'lockmanager' in add_cluster and add_cluster['lockmanager'] == 'gulm'"
 								>GULM
 							</li>
-							<div id="gulm_lockservers" class="invisible">
+							<div id="gulm_lockservers"
+								tal:attributes="class python: (add_cluster and 'lockmanager' in add_cluster and add_cluster['lockmanager'] != 'gulm') and 'invisible' or ''">
 								<fieldset>
 								<legend class="rescfg">GULM lock server properties</legend>
 								<p>You must enter exactly 1, 3, or 5 GULM lock servers.</p>
@@ -322,7 +323,7 @@
 												name="__GULM__:server1"
 												tal:attributes="
 													disabled python: not add_cluster or not 'lockmanager' in add_cluster or add_cluster['lockmanager'] != 'gulm';
-													value gulm_lockservers/server1 | nothing" />
+													value add_cluster/gulm_lockservers/server1 | nothing" />
 										</td>
 									</tr>
 									<tr>
@@ -332,7 +333,7 @@
 												name="__GULM__:server2"
 												tal:attributes="
 													disabled python: not add_cluster or not 'lockmanager' in add_cluster or add_cluster['lockmanager'] != 'gulm';
-													value gulm_lockservers/server2 | nothing" />
+													value add_cluster/gulm_lockservers/server2 | nothing" />
 										</td>
 									</tr>
 									<tr>
@@ -342,7 +343,7 @@
 												name="__GULM__:server3"
 												tal:attributes="
 													disabled python: not add_cluster or not 'lockmanager' in add_cluster or add_cluster['lockmanager'] != 'gulm';
-													value gulm_lockservers/server3 | nothing" />
+													value add_cluster/gulm_lockservers/server3 | nothing" />
 										</td>
 									</tr>
 									<tr>
@@ -352,7 +353,7 @@
 												name="__GULM__:server4"
 												tal:attributes="
 													disabled python: not add_cluster or not 'lockmanager' in add_cluster or add_cluster['lockmanager'] != 'gulm';
-													value gulm_lockservers/server4 | nothing" />
+													value add_cluster/gulm_lockservers/server4 | nothing" />
 										</td>
 									</tr>
 									<tr>
@@ -362,7 +363,7 @@
 												name="__GULM__:server5"
 												tal:attributes="
 													disabled python: not add_cluster or not 'lockmanager' in add_cluster or add_cluster['lockmanager'] != 'gulm';
-													value gulm_lockservers/server5 | nothing" />
+													value add_cluster/gulm_lockservers/server5 | nothing" />
 										</td>
 									</tr>
 								</table>
@@ -3735,7 +3736,7 @@
 
 <div metal:define-macro="xenvmadd-form">
   <span tal:define="global vmforminfo python: here.getXenVMInfo(modelb, request)"/>
-  <form method="get" action="" tal:attributes="action vmforminfo/formurl">
+  <form method="get" tal:attributes="action vmforminfo/formurl">
   <h4>Name for this VM: </h4><input type="text" name="xenvmname" value=""/>
   <h4>Path to configuration file: </h4><input type="text" name="xenvmpath" value=""/>
   <input type="submit" value="Create Xen VM"/>
@@ -3745,7 +3746,7 @@
 <div metal:define-macro="xenvmconfig-form">
   <h4>Properties for Xen VM <font color="green"><span tal:content="request/servicename"/></font></h4>
   <span tal:define="global xeninfo python:here.getXenVMInfo(modelb, request)">
-  <form method="get" action="" tal:attributes="action xeninfo/formurl">
+  <form method="get" tal:attributes="action xeninfo/formurl">
   <h4>Name of VM: </h4><input type="text" name="xenvmname" value="" tal:attributes="value xeninfo/name"/>
   <h4>Path to configuration file: </h4><input type="text" name="xenvmpath" value="" tal:attributes="value xeninfo/path"/>
   <input type="button" value="Delete"/>
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/30 21:41:56	1.218
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/31 05:26:45	1.219
@@ -235,6 +235,46 @@
 	if len(clusterName) > 15:
 		errors.append('A cluster\'s name must be less than 16 characters long.')
 
+	try:
+		cluster_os = add_cluster['cluster_os']
+	except:
+		pass
+
+	lockmanager = 'dlm'
+	if cluster_os == 'rhel4':
+		add_cluster['gulm_support'] = True
+		if not request.form.has_key('lockmanager'):
+			# The user hasn't been presented with the RHEL4
+			# lock manager options yet.
+			incomplete = True
+		else:
+			try:
+				lockmanager = request.form['lockmanager'].strip()
+			except:
+				lockmanager = 'dlm'
+
+	lockservers = None
+	if lockmanager == 'gulm':
+		add_cluster['lockmanager'] = 'gulm'
+		try:
+			lockservers = filter(lambda x: x.strip(), request.form['__GULM__'])
+			if not lockservers or len(lockservers) < 1:
+				raise Exception, 'blank'
+			num_lockservers = len(lockservers)
+			if not num_lockservers in (1, 3, 5):
+				errors.append('You must have exactly 1, 3, or 5 GULM lock servers. You submitted %d lock servers.' % num_lockservers)
+		except:
+			errors.append('No lock servers were given.')
+
+		if len(errors) > 0:
+			try:
+				ls_hash = {}
+				for i in xrange(num_lockservers):
+					ls_hash['server%d' % (i + 1)] = lockservers[i]
+				add_cluster['gulm_lockservers'] = ls_hash
+			except:
+				pass
+
 	if incomplete or len(errors) > 0:
 		request.SESSION.set('create_cluster', add_cluster)
 		return (False, { 'errors': errors, 'messages': messages })
@@ -248,7 +288,8 @@
 					True,
 					add_cluster['shared_storage'],
 					False,
-					add_cluster['download_pkgs'])
+					add_cluster['download_pkgs'],
+					lockservers)
 
 	if not batchNode:
 		request.SESSION.set('create_cluster', add_cluster)
@@ -1638,7 +1679,7 @@
 					# games), so it's safe to pull the existing entry from
 					# the model. All we need is the device name, and nothing
 					# else needs to be done here.
-					# 
+					#
 					# For an existing non-shared device update the device
 					# in the model, since the user could have edited it.
 					retcode, retmsg = validateFenceDevice(fence_form, model)
--- conga/luci/site/luci/Extensions/ricci_bridge.py	2007/01/08 19:46:50	1.54
+++ conga/luci/site/luci/Extensions/ricci_bridge.py	2007/01/31 05:26:45	1.55
@@ -153,7 +153,8 @@
 		       install_services,
 		       install_shared_storage,
 		       install_LVS,
-		       upgrade_rpms):
+		       upgrade_rpms,
+		       gulm_lockservers):
 	
 	batch = '<?xml version="1.0" ?>'
 	batch += '<batch>'
@@ -228,12 +229,19 @@
 			batch += '<clusternode name="' + i + '" votes="1" nodeid="' + str(x) + '" />'
 		x = x + 1
 	batch += '</clusternodes>'
-	if len(nodeList) == 2:
-		batch += '<cman expected_votes="1" two_node="1"/>'
-	else:
-		batch += '<cman/>'
+
+	if not gulm_lockservers:
+		if len(nodeList) == 2:
+			batch += '<cman expected_votes="1" two_node="1"/>'
+		else:
+			batch += '<cman/>'
 	batch += '<fencedevices/>'
 	batch += '<rm/>'
+	if gulm_lockservers:
+		batch += '<gulm>'
+		for i in gulm_lockservers:
+			batch += '<lockserver name="%s" />' % i
+		batch += '</gulm>'
 	batch += '</cluster>'
 	batch += '</var>'
 	batch += '</function_call>'
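
For reference, the batch-building changes above reduce to this skeleton: a CMAN cluster gets a <cman/> tag (with special quorum settings for two nodes), while a GULM cluster gets a <gulm> block with one <lockserver/> per server and no <cman/> tag. A sketch, with node and fence details omitted:

def cluster_conf_skeleton(nodes, gulm_lockservers=None):
    batch = '<cluster>'
    if not gulm_lockservers:
        if len(nodes) == 2:
            # two-node CMAN clusters need special quorum settings
            batch += '<cman expected_votes="1" two_node="1"/>'
        else:
            batch += '<cman/>'
    batch += '<fencedevices/>'
    batch += '<rm/>'
    if gulm_lockservers:
        batch += '<gulm>'
        for name in gulm_lockservers:
            batch += '<lockserver name="%s" />' % name
        batch += '</gulm>'
    return batch + '</cluster>'

print(cluster_conf_skeleton(['n1', 'n2', 'n3'], ['n1', 'n2', 'n3']))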




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-01-23 13:53 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-01-23 13:53 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-01-23 13:53:36

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: Ip.py cluster_adapters.py 
	                           conga_constants.py 

Log message:
	GULM support for RHEL4 clusters.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.157&r2=1.158
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Ip.py.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.209&r2=1.210
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&r1=1.34&r2=1.35

--- conga/luci/cluster/form-macros	2007/01/22 21:18:58	1.157
+++ conga/luci/cluster/form-macros	2007/01/23 13:53:35	1.158
@@ -2828,7 +2828,13 @@
 				<span tal:attributes="class python: 'cluster node ' + status_class"
 					tal:content="python: cluster_node_status_str" />
 			</td>
+		</tr>
 
+		<tr class="cluster node info_middle"
+			tal:condition="nodeinfo/gulm_lockserver">
+			<td class="cluster node node_status" colspan="2">
+				This node is a GULM lock server.
+			</td>
 		</tr>
 
 		<tr class="cluster node info_bottom"
@@ -3230,6 +3236,13 @@
 				</td>
 			</tr>
 
+			<tr class="node info_middle"
+				tal:condition="nd/gulm_lockserver">
+				<td class="node node_status" colspan="2">
+					This node is a GULM lock server.
+				</td>
+			</tr>
+
 			<tr class="node info_bottom">
 				<td class="node node_services">
 					<strong class="cluster node">Services on this Node:</strong>
--- conga/luci/site/luci/Extensions/Ip.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/Ip.py	2007/01/23 13:53:36	1.2
@@ -6,7 +6,7 @@
 _ = gettext.gettext
 
 TAG_NAME = "ip"
-RESOURCE_TYPE=_("IP Address: ")
+RESOURCE_TYPE=_("IP Address")
 
 class Ip(BaseResource):
   def __init__(self):
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/22 17:06:48	1.209
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/23 13:53:36	1.210
@@ -16,6 +16,7 @@
 from NFSClient import NFSClient
 from NFSExport import NFSExport
 from Service import Service
+from Lockserver import Lockserver
 from Netfs import Netfs
 from Apache import Apache
 from MySQL import MySQL
@@ -1136,11 +1137,32 @@
 
 	return (True, {})
 
+def validateGULMConfig(model, form):
+	gulm_ptr = model.getGULMPtr()
+	if not gulm_ptr:
+		return (False, {'errors': [ 'This cluster appears not to be using GULM locking.' ]})
+	node_list = map(lambda x: x.getName(), model.getNodes())
+
+	gulm_lockservers = list()
+	for node in node_list:
+		if form.has_key(node) and form[node] == 'on':
+			ls = Lockserver()
+			ls.addAttribute('name', node)
+			gulm_lockservers.append(ls)
+
+	num_ls = len(gulm_lockservers)
+	if not num_ls in (1, 3, 4, 5):
+		return (False, {'errors': [ 'You must have exactly 1, 3, 4, or 5 GULM lock servers. You selected %d nodes as lock servers.' % num_ls ]})
+
+	model.GULM_ptr.children = gulm_lockservers
+	return (True, {})
+
 configFormValidators = {
 	'general': validateGeneralConfig,
 	'mcast': validateMCastConfig,
 	'fence': validateFenceConfig,
-	'qdisk': validateQDiskConfig
+	'qdisk': validateQDiskConfig,
+	'gulm': validateGULMConfig
 }
 
 def validateConfigCluster(self, request):
@@ -3331,6 +3353,22 @@
     clumap['mcast_addr'] = "1.2.3.4"
 
   #-------------
+  #GULM params (rhel4 only)
+  gulm_ptr = model.getGULMPtr()
+  if gulm_ptr:
+    lockserv_list = list()
+    clunodes = model.getNodes()
+    gulm_lockservs = map(lambda x: x.getName(), gulm_ptr.getChildren())
+    for node in clunodes:
+      n = node.getName()
+      lockserv_list.append((n, n in gulm_lockservs))
+    clumap['gulm'] = True
+    clumap['gulm_url'] = prop_baseurl + PROPERTIES_TAB + '=' + PROP_GULM_TAB
+    clumap['gulm_lockservers'] = lockserv_list
+  else:
+    clumap['gulm'] = False
+
+  #-------------
   #quorum disk params
   quorumd_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_QDISK_TAB
   clumap['quorumd_url'] = quorumd_url
@@ -3569,18 +3607,18 @@
 			rc = RicciCommunicator(nodename_resolved)
 		except Exception, e:
 			luci_log.debug_verbose('CStop0: [%d] RC %s: %s' \
-				% (delete, nodename_resolved, str(e)))
+				% (delete is True, str(nodename_resolved), str(e)))
 			errors += 1
 			continue
 
 		if delete is True:
 			if nodeDelete(self, rc, model, clustername, nodename, nodename_resolved, delete_cluster=True) is None:
-				luci_log.debug_verbose('CStop1: nodeDelete failed')
+				luci_log.debug_verbose('CStop1: [1] nodeDelete failed')
 				errors += 1
 		else:
 			if nodeLeave(self, rc, clustername, nodename_resolved) is None:
-				luci_log.debug_verbose('CStop2: nodeLeave %s' \
-					% (delete, nodename_resolved))
+				luci_log.debug_verbose('CStop2: [0] nodeLeave %s' \
+					% (nodename_resolved))
 				errors += 1
 	return errors
 
@@ -4026,6 +4064,7 @@
 
   fdom_dict_list = list()
   if model:
+    infohash['gulm_lockserver'] = model.isNodeLockserver(nodename)
     #next is faildoms
     fdoms = model.getFailoverDomainsForNode(nodename)
     for fdom in fdoms:
@@ -4034,6 +4073,8 @@
       fdomurl = baseurl + "?" + PAGETYPE + "=" + FDOM_CONFIG + "&" + CLUNAME + "=" + clustername + "&fdomname=" + fdom.getName()
       fdom_dict['fdomurl'] = fdomurl
       fdom_dict_list.append(fdom_dict)
+  else:
+    infohash['gulm_lockserver'] = False
 
   infohash['fdoms'] = fdom_dict_list
 
@@ -4104,6 +4145,8 @@
     map = {}
     name = item['name']
     map['nodename'] = name
+    map['gulm_lockserver'] = model.isNodeLockserver(name)
+
     try:
       baseurl = req['URL']
     except:
--- conga/luci/site/luci/Extensions/conga_constants.py	2007/01/11 22:49:42	1.34
+++ conga/luci/site/luci/Extensions/conga_constants.py	2007/01/23 13:53:36	1.35
@@ -73,6 +73,7 @@
 PROP_FENCE_TAB = '2'
 PROP_MCAST_TAB = '3'
 PROP_QDISK_TAB = '4'
+PROP_GULM_TAB = '5'
 
 PAGETYPE="pagetype"
 ACTIONTYPE="actiontype"
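
The count rule validateGULMConfig enforces above, shown in isolation; the form and model plumbing are stripped out and the function name is illustrative:

def check_lockserver_count(selected_nodes):
    num_ls = len(selected_nodes)
    if num_ls not in (1, 3, 4, 5):
        return (False, 'You must have exactly 1, 3, 4, or 5 GULM lock '
                       'servers. You selected %d nodes as lock servers.' % num_ls)
    return (True, None)

print(check_lockserver_count(['node1', 'node2']))           # rejected: 2 servers
print(check_lockserver_count(['node1', 'node2', 'node3']))  # accepted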




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-01-15 18:21 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-01-15 18:21 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-01-15 18:21:50

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: FenceHandler.py cluster_adapters.py 
	                           conga_constants.py 

Log message:
	changes related to bz212021 that address bugs found during testing of the original fix

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.90.2.15&r2=1.90.2.16
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceHandler.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.4.2.2&r2=1.4.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.120.2.18&r2=1.120.2.19
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.19.2.6&r2=1.19.2.7

--- conga/luci/cluster/form-macros	2007/01/10 23:49:18	1.90.2.15
+++ conga/luci/cluster/form-macros	2007/01/15 18:21:50	1.90.2.16
@@ -2046,7 +2046,7 @@
 			<tr>
 				<td>Authentication Type</td>
 				<td>
-					<input name="auth_type" type="text" title="Options are to leave blank for none, password, md2, or md5"
+					<input name="auth_type" type="text" title="Options are to leave blank for none, password, or md5"
 						tal:attributes="value cur_fencedev/auth_type | nothing" />
 				</td>
 			</tr>
@@ -2751,6 +2751,17 @@
 				<input type="submit" value="Go"/>
 				</form>
 			</td>
+
+			<td class="cluster node node_action"
+				tal:condition="python: nodeinfo['nodestate'] != '0' and nodeinfo['nodestate'] != '1'">
+				<form method="post" onSubmit="return dropdown(this.gourl)">
+				<select name="gourl">
+					<option value="">Choose a Task...</option>
+					<option tal:attributes="value nodeinfo/fence_url | nothing">Fence this node</option>
+				</select>
+				<input type="submit" value="Go"/>
+				</form>
+			</td>
 		</tr>
 
 		<tr class="cluster node info_middle">
@@ -3143,6 +3154,15 @@
 						<input type="submit" value="Go"/>
 					</form>
 				</td>
+				<td class="node node_action" tal:condition="python: nd['status'] != '0' and nd['status'] != '1'">
+					<form method="post" onSubmit="return dropdown(this.gourl)">
+						<select class="node" name="gourl">
+							<option value="">Choose a Task...</option>
+							<option tal:attributes="value nd/fence_it_url | nothing">Fence this node</option>
+						</select>
+						<input type="submit" value="Go"/>
+					</form>
+				</td>
 			</tr>
 
 			<tr class="node info_middle">
--- conga/luci/site/luci/Extensions/FenceHandler.py	2006/12/22 17:50:16	1.4.2.2
+++ conga/luci/site/luci/Extensions/FenceHandler.py	2007/01/15 18:21:50	1.4.2.3
@@ -1058,7 +1058,7 @@
 
   if agent_type == "fence_apc":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1066,7 +1066,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1082,11 +1082,11 @@
 
   elif agent_type == "fence_wti":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1101,7 +1101,7 @@
 
   elif agent_type == "fence_brocade":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1109,7 +1109,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1125,11 +1125,11 @@
 
   elif agent_type == "fence_vixel":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1145,7 +1145,7 @@
 
   elif agent_type == "fence_mcdata":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1153,7 +1153,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1199,7 +1199,7 @@
 
   elif agent_type == "fence_sanbox2":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1207,7 +1207,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1223,7 +1223,7 @@
 
   elif agent_type == "fence_bladecenter":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1231,7 +1231,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1247,7 +1247,7 @@
 
   elif agent_type == "fence_bullpap":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1255,7 +1255,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1295,6 +1295,7 @@
 def validateFenceDevice(form, model): 
   from FenceDevice import FenceDevice
   namechange = False
+
   try:
     agent_type = form['fence_type']
   except KeyError, e:
@@ -1309,7 +1310,7 @@
     return (FD_VAL_FAIL, "No device name in form submission")
 
   if fencedev_name == "":
-    return (1, "A unique name is required for every fence device")
+    return (1, "No device name in form submission")
 
   try:
     orig_name = form['orig_name']
@@ -1319,10 +1320,12 @@
   if orig_name != fencedev_name:
     namechange = True
 
-  fencedevs = model.getFenceDevices()
-  for fd in fencedevs:
-    if fd.getName().strip() == fencedev_name:
-      return (FD_VAL_FAIL, FD_PROVIDE_NAME)
+    fencedevs = model.getFenceDevices()
+    for fd in fencedevs:
+      if fd.getName().strip() == fencedev_name:
+        return (FD_VAL_FAIL, FD_PROVIDE_NAME)
+  else:
+    fencedevs = model.getFenceDevices()
 
   #Now we know name is unique...find device now
   fencedev = None
@@ -1336,7 +1339,7 @@
 
   if agent_type == "fence_apc":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1344,7 +1347,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1359,11 +1362,11 @@
 
   elif agent_type == "fence_wti":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1377,7 +1380,7 @@
 
   elif agent_type == "fence_brocade":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1385,7 +1388,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1400,11 +1403,11 @@
 
   elif agent_type == "fence_vixel":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1419,7 +1422,7 @@
 
   elif agent_type == "fence_mcdata":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1427,7 +1430,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1470,7 +1473,7 @@
 
   elif agent_type == "fence_sanbox2":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1478,7 +1481,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1493,7 +1496,7 @@
 
   elif agent_type == "fence_bladecenter":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1501,7 +1504,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
@@ -1516,7 +1519,7 @@
 
   elif agent_type == "fence_bullpap":
     try:
-      ip = form['ip_addr']
+      ip = form['ipaddr']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_IP)
     try:
@@ -1524,7 +1527,7 @@
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_LOGIN)
     try:
-      pwd = form['password']
+      pwd = form['passwd']
     except KeyError, e:
       return (FD_VAL_FAIL, FD_PROVIDE_PASSWD)
 
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/10 23:49:18	1.120.2.18
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/15 18:21:50	1.120.2.19
@@ -1979,7 +1979,7 @@
 	31: validateResourceAdd,
 	33: validateResourceAdd,
 	51: validateFenceAdd,
-	50: validateFenceEdit,
+	54: validateFenceEdit,
 	55: validateDaemonProperties,
 	57: deleteFenceDevice,
 	58: validateNodeFenceConfig
@@ -4066,12 +4066,13 @@
     infohash['reboot_url'] = baseurl + "?pagetype=" +NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + nodename + "&clustername=" + clustername
-
-  if nodestate == NODE_INACTIVE:
+  elif nodestate == NODE_INACTIVE:
     infohash['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_JOIN_CLUSTER + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['reboot_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + nodename + "&clustername=" + clustername
+  else:
+    infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
 
   #figure out current services running on this node
   svc_dict_list = list()
@@ -4196,13 +4197,13 @@
       map['reboot_url'] = baseurl + "?pagetype=" +NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + name + "&clustername=" + clustername
       map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
       map['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + name + "&clustername=" + clustername
-
-    if map['status'] == NODE_INACTIVE:
+    elif map['status'] == NODE_INACTIVE:
       map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_JOIN_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
       map['reboot_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + name + "&clustername=" + clustername
       map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
       map['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + name + "&clustername=" + clustername
-
+    else:
+      map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
 
     #figure out current services running on this node
     svc_dict_list = list()
--- conga/luci/site/luci/Extensions/conga_constants.py	2007/01/10 22:53:56	1.19.2.6
+++ conga/luci/site/luci/Extensions/conga_constants.py	2007/01/15 18:21:50	1.19.2.7
@@ -130,7 +130,7 @@
 
 POSSIBLE_REBOOT_MESSAGE = "This node is not currently responding and is probably rebooting as planned. This state should persist for 5 minutes or so..."
 
-REDIRECT_MSG = " You will be redirected in 5 seconds. Please fasten your safety restraints."
+REDIRECT_MSG = " You will be redirected in 5 seconds."
 
 
 # Homebase-specific constants
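
Beyond the form-key renames (ip_addr becomes ipaddr, password becomes passwd), the per-agent blocks above all repeat one lookup-or-fail pattern. A hypothetical helper showing that pattern once; FD_VAL_FAIL is assumed here to be 1, matching the bare return (1, ...) in FenceHandler.py, and the message text is illustrative:

FD_VAL_FAIL = 1  # assumed value of the FenceHandler.py constant
FD_PROVIDE_IP = 'An IP address must be provided.'  # illustrative message

def require_field(form, key, message):
    # returns (value, None) on success, (None, error_tuple) on failure
    try:
        return (form[key], None)
    except KeyError:
        return (None, (FD_VAL_FAIL, message))

ip, err = require_field({'login': 'apc', 'passwd': 'secret'},
                        'ipaddr', FD_PROVIDE_IP)
# err is (1, 'An IP address must be provided.') because ipaddr is absent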




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-01-11 19:11 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-01-11 19:11 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-01-11 19:11:04

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	give users the option to fence nodes whose ricci agents are not functioning properly

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.151&r2=1.152
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.201&r2=1.202

--- conga/luci/cluster/form-macros	2007/01/10 23:47:11	1.151
+++ conga/luci/cluster/form-macros	2007/01/11 19:11:04	1.152
@@ -2751,6 +2751,17 @@
 				<input type="submit" value="Go"/>
 				</form>
 			</td>
+
+			<td class="cluster node node_action"
+				tal:condition="python: nodeinfo['nodestate'] != '0' and nodeinfo['nodestate'] != '1'">
+				<form method="post" onSubmit="return dropdown(this.gourl)">
+				<select name="gourl">
+					<option value="">Choose a Task...</option>
+					<option tal:attributes="value nodeinfo/fence_url | nothing">Fence this node</option>
+				</select>
+				<input type="submit" value="Go"/>
+				</form>
+			</td>
 		</tr>
 
 		<tr class="cluster node info_middle">
@@ -3143,6 +3154,15 @@
 						<input type="submit" value="Go"/>
 					</form>
 				</td>
+				<td class="node node_action" tal:condition="python: nd['status'] != '0' and nd['status'] != '1'">
+					<form method="post" onSubmit="return dropdown(this.gourl)">
+						<select class="node" name="gourl">
+							<option value="">Choose a Task...</option>
+							<option tal:attributes="value nd/fence_it_url | nothing">Fence this node</option>
+						</select>
+						<input type="submit" value="Go"/>
+					</form>
+				</td>
 			</tr>
 
 			<tr class="node info_middle">
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/10 23:47:11	1.201
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/11 19:11:04	1.202
@@ -4066,12 +4066,13 @@
     infohash['reboot_url'] = baseurl + "?pagetype=" +NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + nodename + "&clustername=" + clustername
-
-  if nodestate == NODE_INACTIVE:
+  elif nodestate == NODE_INACTIVE:
     infohash['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_JOIN_CLUSTER + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['reboot_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
     infohash['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + nodename + "&clustername=" + clustername
+  else:
+    infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
 
   #figure out current services running on this node
   svc_dict_list = list()
@@ -4196,13 +4197,13 @@
       map['reboot_url'] = baseurl + "?pagetype=" +NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + name + "&clustername=" + clustername
       map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
       map['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + name + "&clustername=" + clustername
-
-    if map['status'] == NODE_INACTIVE:
+    elif map['status'] == NODE_INACTIVE:
       map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_JOIN_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
       map['reboot_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + name + "&clustername=" + clustername
       map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
       map['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + name + "&clustername=" + clustername
-
+    else:
+      map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
 
     #figure out current services running on this node
     svc_dict_list = list()
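
The template and adapter hunks above implement one rule: nodes in state '0' or '1' get the full task menu, while a node in any other state, typically one whose ricci agent is unreachable, is offered only the fence task. A compact sketch of that mapping, with illustrative action names:

def node_actions(nodestate):
    if nodestate in ('0', '1'):
        return ['join_or_leave', 'reboot', 'fence', 'delete']
    # unreachable or unknown state: fencing is the only action offered
    return ['fence']

print(node_actions('2'))  # ['fence']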




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-01-10 21:40 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-01-10 21:40 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-01-10 21:40:05

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	more node fencing bits

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.149&r2=1.150
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.197&r2=1.198

--- conga/luci/cluster/form-macros	2007/01/10 20:02:16	1.149
+++ conga/luci/cluster/form-macros	2007/01/10 21:40:05	1.150
@@ -2184,7 +2184,7 @@
 				</td>
 			</tr>
 			<tr>
-				<td>Switch</td>
+				<td>Switch (optional)</td>
 				<td>
 					<input name="switch" type="text"
 						tal:attributes="value cur_instance/switch | nothing" />
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/10 20:06:26	1.197
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/10 21:40:05	1.198
@@ -1485,9 +1485,6 @@
 
 	try:
 		doc = minidom.parseString(form_xml)
-		forms = doc.getElementsByTagName('form')
-		if len(forms) < 1:
-			raise
 	except Exception, e:
 		luci_log.debug_verbose('vNFC5: error: %s' % str(e))
 		return (False, {'errors': ['The fence data submitted is not properly formed.']})
@@ -1504,11 +1501,30 @@
 		method_id = levels[fence_level_num - 1].getAttribute('name')
 		if not method_id:
 			raise Exception, 'No method ID'
+		fence_method = Method()
+		fence_method.addAttribute('name', str(method_id))
+		node.children[0].children[fence_level_num - 1] = fence_method
 	except Exception, e:
 		method_id = fence_level
-	
-	fence_method = Method()
-	fence_method.addAttribute('name', str(method_id))
+		fence_method = Method()
+		fence_method.addAttribute('name', str(method_id))
+
+	forms = doc.getElementsByTagName('form')
+	if len(forms) < 1:
+		delete_target = None
+		for l in levels:
+			# delete the fence level
+			if l.getAttribute('name') == method_id:
+				delete_target = l
+				break
+		if delete_target is not None:
+			try:
+				node.getChildren()[0].removeChild(l)
+			except Exception, e:
+				luci_log.debug_verbose('vNFC6a: %s: %s' % (method_id, str(e)))
+				return (False, {'errors': ['An error occurred while deleting fence method %s' % method_id ]}) 
+		else:
+			return (True, {'messages': ['No changes were made.'] })
 
 	form_hash = {}
 	for i in forms:
@@ -1609,25 +1625,32 @@
 							% (fence_form['name'], str(e)))
 						return (False, {'errors': [ 'Unable to determine the original name for the device now named %s' % fencedev_name ]})
 
-					fence_dev_list = model.getFenceDevices()
 					fencedev_obj = None
+					fence_dev_list = model.getFenceDevices()
 					for fd in fence_dev_list:
-						if fd.getAttribute('name') == 'old_name':
+						if fd.getAttribute('name') == old_name:
 							fencedev_obj = fd
-							try:
-								model.fencedevices_ptr.removeChild(fd)
-							except Exception, e:
-								luci_log.debug_verbose('VNFC8a: %s: %s' \
-									% (old_name, str(e)))
-								return (False, {'errors': [ 'Unable to remove old fence device %s' % old_name ]})
 							break
+
 					if fencedev_obj is None:
 						luci_log.debug_verbose('vNFC14: no fence device named %s was found' % old_name)
 						return (False, {'errors': ['No fence device named %s was found' % old_name ] })
+					else:
+						try:
+							model.fencedevices_ptr.removeChild(fd)
+						except Exception, e:
+							luci_log.debug_verbose('VNFC8a: %s: %s' \
+								% (old_name, str(e)))
+							return (False, {'errors': [ 'Unable to remove old fence device %s' % old_name ]})
 
 					for k in fence_form.keys():
 						if fence_form[k]:
 							fencedev_obj.addAttribute(k, str(fence_form[k]))
+
+					# Add back the tags under the method block
+					# for the fence instance
+					instance_list.append({'name': fencedev_name })
+
 		else:
 			# The user created a new fence device.
 			fencedev_name = fence_form['name']
@@ -1636,6 +1659,12 @@
 				if fence_form[k]:
 					fencedev_obj.addAttribute(k, str(fence_form[k]))
 
+			# If it's not shared, we need to create an instance form
+			# so the appropriate XML goes into the <method> block inside
+			# <node><fence>. All we need for that is the device name.
+			if not 'sharable' in fence_form:
+				instance_list.append({'name': fencedev_name })
+
 		if fencedev_obj is not None:
 			# If a device with this name exists in the model
 			# already, replace it with the current object. If
@@ -1679,9 +1708,20 @@
 					device_obj.addAttribute(k, str(inst[k]))
 			fence_method.addChild(device_obj)
 
-		try:
-			levels[fence_level_num - 1] = fence_method
-		except:
+		if len(node.getChildren()) > 0:
+			# There's already a <fence> block
+			found_target = False
+			for idx in xrange(len(levels)):
+				if levels[idx].getAttribute('name') == method_id:
+					found_target = True
+					break
+
+			if found_target is False:
+				# There's a fence block, but no relevant method
+				# block
+				node.getChildren()[0].addChild(fence_method)
+		else:
+			# There is no <fence> tag under the node yet.
 			fence_node = Fence()
 			fence_node.addChild(fence_method)
 			node.addChild(fence_node)
@@ -1699,6 +1739,7 @@
 			% str(e))
 		return (False, {'errors': [ 'An error occurred while constructing the new cluster configuration.' ]})
 
+
 	rc = getRicciAgent(self, clustername)
 	if not rc:
 		luci_log.debug_verbose('vNFC18: unable to find a ricci agent for cluster %s' % clustername)
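
One behavioral change above is easy to miss: when the submitted fence XML contains no form elements, the matching method level is now deleted from the node's fence block instead of being rebuilt. A standalone sketch of that lookup-and-remove step; the real code operates on luci model objects rather than raw minidom:

from xml.dom.minidom import parseString

doc = parseString('<fence><method name="1"/><method name="2"/></fence>')
levels = doc.getElementsByTagName('method')

delete_target = None
for level in levels:
    if level.getAttribute('name') == '2':  # method_id from the request
        delete_target = level
        break

if delete_target is not None:
    doc.documentElement.removeChild(delete_target)
# else: report "No changes were made."

print(doc.documentElement.toxml())  # <fence><method name="1"/></fence>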




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2007-01-06  3:29 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2007-01-06  3:29 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-01-06 03:29:17

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	support preserving entries in cluster.conf for custom fence agents

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.142&r2=1.143
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.194&r2=1.195

--- conga/luci/cluster/form-macros	2007/01/05 23:44:10	1.142
+++ conga/luci/cluster/form-macros	2007/01/06 03:29:16	1.143
@@ -1204,6 +1204,31 @@
 	<option name="fence_manual" value="fence_manual">Manual Fencing</option>
 </div>
 
+<div metal:define-macro="fence-form-unknown"
+	tal:attributes="id cur_fencedev/name | nothing">
+
+	<div id="fence_unknown" class="fencedev">
+		<table>
+			<tr>
+				<td><strong class="cluster">Fence Type</strong></td>
+				<td>[unknown]</td>
+			</tr>
+			<tr>
+				<td>Name</td>
+				<td>
+					<span tal:replace="cur_fencedev/name | nothing" />
+				</td>
+			</tr>
+		</table>
+
+		<tal:block tal:condition="exists: cur_fencedev">
+			<input type="hidden" name="existing_device" value="1" />
+			<input type="hidden" name="old_name"
+				tal:attributes="value cur_fencedev/name | nothing" />
+		</tal:block>
+	</div>
+</div>
+
 <div metal:define-macro="fence-form-apc"
 	tal:attributes="id cur_fencedev/name | nothing">
 
@@ -4020,6 +4045,10 @@
 	<tal:block tal:condition="python: cur_fence_type == 'fence_manual'">
 		<tal:block metal:use-macro="here/form-macros/macros/fence-form-manual" />
 	</tal:block>
+
+    <tal:block tal:condition="exists:cur_fencedev/unknown">
+		<tal:block metal:use-macro="here/form-macros/macros/fence-form-unknown" />
+	</tal:block>
 </div>
 
 
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/05 23:44:10	1.194
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/01/06 03:29:16	1.195
@@ -3968,6 +3968,7 @@
       try:
         map['pretty_name'] = FENCE_OPTS[fencedev.getAgentType()]
       except:
+        map['unknown'] = True
         map['pretty_name'] = fencedev.getAgentType()
 
       nodes_used = list()
@@ -4082,7 +4083,11 @@
       if fd is not None:
         if fd.isShared() == False:  #Not a shared dev...build struct and add
           fencedev = {}
-          fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+          try:
+            fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+          except:
+            fencedev['unknown'] = True
+            fencedev['prettyname'] = fd.getAgentType()
           fencedev['isShared'] = False
           fencedev['id'] = str(major_num)
           major_num = major_num + 1
@@ -4119,7 +4124,11 @@
             continue
           else: #Shared, but not used above...so we need a new fencedev struct
             fencedev = {}
-            fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+            try:
+              fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+            except:
+              fencedev['unknown'] = True
+              fencedev['prettyname'] = fd.getAgentType()
             fencedev['isShared'] = True
             fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV 
             fencedev['id'] = str(major_num)
@@ -4157,7 +4166,11 @@
         shared_struct['name'] = fd.getName().strip()
         agentname = fd.getAgentType()
         shared_struct['agent'] = agentname
-        shared_struct['prettyname'] = FENCE_OPTS[agentname]
+        try:
+          shared_struct['prettyname'] = FENCE_OPTS[agentname]
+        except:
+          shared_struct['unknown'] = True
+          shared_struct['prettyname'] = agentname
         shared1.append(shared_struct)
     map['shared1'] = shared1
 
@@ -4177,7 +4190,11 @@
       if fd is not None:
         if fd.isShared() == False:  #Not a shared dev...build struct and add
           fencedev = {}
-          fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+          try:
+            fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+          except:
+            fencedev['unknown'] = True
+            fencedev['prettyname'] = fd.getAgentType()
           fencedev['isShared'] = False
           fencedev['id'] = str(major_num)
           major_num = major_num + 1
@@ -4214,7 +4231,11 @@
             continue
           else: #Shared, but not used above...so we need a new fencedev struct
             fencedev = {}
-            fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+            try:
+              fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+            except:
+              fencedev['unknown'] = True
+              fencedev['prettyname'] = fd.getAgentType()
             fencedev['isShared'] = True
             fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV 
             fencedev['id'] = str(major_num)
@@ -4252,7 +4273,11 @@
         shared_struct['name'] = fd.getName().strip()
         agentname = fd.getAgentType()
         shared_struct['agent'] = agentname
-        shared_struct['prettyname'] = FENCE_OPTS[agentname]
+        try:
+          shared_struct['prettyname'] = FENCE_OPTS[agentname]
+        except:
+          shared_struct['unknown'] = True
+          shared_struct['prettyname'] = agentname
         shared2.append(shared_struct)
     map['shared2'] = shared2
 
@@ -4283,6 +4308,7 @@
       try:
         fencedev['pretty_name'] = FENCE_OPTS[fd.getAgentType()]
       except:
+        fencedev['unknown'] = True
         fencedev['pretty_name'] = fd.getAgentType()
       fencedev['agent'] = fd.getAgentType()
       #Add config url for this fencedev
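
Each hunk above applies the same guard: a known agent type gets its display name from the FENCE_OPTS table, while a custom agent falls back to the raw agent string and is flagged unknown so the templates can render the generic form. The pattern reduced to one function; the two FENCE_OPTS entries are illustrative samples, not the real table:

FENCE_OPTS = {
    'fence_apc': 'APC Power Switch',  # sample entry
    'fence_wti': 'WTI Power Switch',  # sample entry
}

def describe_agent(agent_type):
    info = {}
    try:
        info['pretty_name'] = FENCE_OPTS[agent_type]
    except KeyError:
        info['unknown'] = True
        info['pretty_name'] = agent_type
    return info

print(describe_agent('fence_homegrown'))
# pretty_name falls back to 'fence_homegrown' and unknown is True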




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-12-14 23:14 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-12-14 23:14 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-12-14 23:14:55

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	- fencing updates
	- fix a typo in multicast address configuration

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.131&r2=1.132
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.183&r2=1.184

--- conga/luci/cluster/form-macros	2006/12/14 18:22:53	1.131
+++ conga/luci/cluster/form-macros	2006/12/14 23:14:54	1.132
@@ -1156,44 +1156,55 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>IP Address</td>
 				<td>
 					<input name="ip_addr" type="text"
-						tal:attributes="value cur_fencedev/ipaddr | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/ipaddr | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Login</td>
 				<td>
 					<input name="login" type="text"
-						tal:attributes="value cur_fencedev/login | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/login | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Password</td>
 				<td>
 					<input name="password" type="password" autocomplete="off"
-						tal:attributes="value cur_fencedev/passwd | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/passwd | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-apc" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_apc" />
 	</div>
@@ -1212,44 +1223,55 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>IP Address</td>
 				<td>
 					<input name="ip_addr" type="text"
-						tal:attributes="value cur_fendev/ipaddr | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/ipaddr | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Login</td>
 				<td>
 					<input name="login" type="text"
-						tal:attributes="value cur_fencedev/login | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/login | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Password</td>
 				<td>
 					<input name="password" type="password" autocomplete="off"
-						tal:attributes="value cur_fencedev/passwd | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/passwd | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-mcdata" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_mcdata" />
 	</div>
@@ -1268,37 +1290,46 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>IP Address</td>
 				<td>
 					<input name="ip_addr" type="text"
-						tal:attributes="value cur_fencedev/ipaddr | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/ipaddr | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Password</td>
 				<td>
 					<input name="password" type="password" autocomplete="off"
-						tal:attributes="value cur_fencedev/passwd | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/passwd | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-wti" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_wti" />
 	</div>
@@ -1342,11 +1373,13 @@
 				</td>
 			</tr>
 		</table>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="fence_type" value="fence_ilo" />
 	</div>
 </div>
@@ -1388,11 +1421,13 @@
 						tal:attributes="value cur_fencedev/passwd | nothing" />
 				</td>
 		</table>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="fence_type" value="fence_drac" />
 	</div>
 </div>
@@ -1435,11 +1470,13 @@
 				</td>
 			</tr>
 		</table>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="fence_type" value="fence_rsa" />
 	</div>
 </div>
@@ -1457,44 +1494,55 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>IP Address</td>
 				<td>
 					<input name="ip_addr" type="text"
-						tal:attributes="value cur_fencedev/ipaddr | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/ipaddr | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Login</td>
 				<td>
 					<input name="login" type="text"
-						tal:attributes="value cur_fencedev/login | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/login | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Password</td>
 				<td>
 					<input name="password" type="password" autocomplete="off"
-						tal:attributes="value cur_fencedev/passwd | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/passwd | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-brocade" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_brocade" />
 	</div>
@@ -1513,7 +1561,9 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
@@ -1524,30 +1574,37 @@
 				<td>Login</td>
 				<td>
 					<input name="login" type="text"
-						tal:attributes="value cur_fencedev/login | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/login | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Password</td>
 				<td>
 					<input name="password" type="password" autocomplete="off"
-						tal:attributes="value cur_fencedev/passwd | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/passwd | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-sanbox2" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_sanbox2" />
 	</div>
@@ -1566,37 +1623,46 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>IP Address</td>
 				<td>
 					<input name="ip_addr" type="text"
-						tal:attributes="value cur_fencedev/ipaddr | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/ipaddr | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Password</td>
 				<td>
 					<input name="password" type="password" autocomplete="off"
-						tal:attributes="value cur_fencedev/passwd | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/passwd | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-brocade" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_vixel" />
 	</div>
@@ -1615,30 +1681,37 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Servers (whitespace separated list)</td>
 				<td>
 					<input name="servers" type="text"
-						tal:attributes="value cur_fencedev/servers | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/servers | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-gnbd" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_gnbd" />
 	</div>
@@ -1657,37 +1730,46 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>CServer</td>
 				<td>
 					<input name="cserver" type="text"
-						tal:attributes="value cur_fencedev/cserver | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/cserver | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>ESH Path (Optional)</td>
 				<td>
 					<input name="login" type="text"
-						tal:attributes="value cur_fencedev/login | string:/opt/pan-mgr/bin/esh" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/login | string:/opt/pan-mgr/bin/esh" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-egenera" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_egenera" />
 	</div>
@@ -1731,11 +1813,13 @@
 				</td>
 			</tr>
 		</table>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="fence_type" value="fence_bladecenter" />
 	</div>
 </div>
@@ -1753,44 +1837,55 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>IP Address</td>
 				<td>
 					<input name="ip_addr" type="text"
-						tal:attributes="value cur_fencedev/ipaddr | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/ipaddr | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Login</td>
 				<td>
 					<input name="login" type="text"
-						tal:attributes="value cur_fencedev/login | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/login | nothing" />
 				</td>
 			</tr>
 			<tr>
 				<td>Password</td>
 				<td>
 					<input name="password" type="password" autocomplete="off"
-						tal:attributes="value cur_fencedev/passwd | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/passwd | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-bullpap" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="fence_bullpap" />
 	</div>
@@ -1827,11 +1922,13 @@
 				</td>
 			</tr>
 		</table>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="fence_type" value="fence_rps10" />
 	</div>
 </div>
@@ -1849,23 +1946,28 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 		</table>	
+
 		<div name="instances">
 			<tal:block tal:condition="exists: cur_fence_instances">
-				<tal:block tal:repeat="cur_fence_instance cur_fence_instances">
+				<tal:block tal:repeat="cur_instance cur_fence_instances">
 					<tal:block
 						metal:use-macro="here/form-macros/macros/fence-instance-form-xvm" />
 				</tal:block>
 			</tal:block>
 		</div>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="xvm" />
 	</div>
@@ -1884,15 +1986,19 @@
 				<td>Name</td>
 				<td>
 					<input name="name" type="text"
-						tal:attributes="value cur_fencedev/name | nothing" />
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/name | nothing" />
 				</td>
 			</tr>
 		</table>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="sharable" value="1" />
 		<input type="hidden" name="fence_type" value="scsi" />
 	</div>
@@ -1940,11 +2046,13 @@
 				<td><input name="auth_type" type="text" Title="Options are to leave blank for none, password, md2, or md5"/></td>
 			</tr>
 		</table>
+
 		<tal:block tal:condition="exists: cur_fencedev">
 			<input type="hidden" name="existing_device" value="1" />
 			<input type="hidden" name="old_name"
 				tal:attributes="value cur_fencedev/name | nothing" />
 		</tal:block>
+
 		<input type="hidden" name="fence_type" value="fence_ipmilan" />
 	</div>
 </div>
@@ -2508,6 +2616,7 @@
 					<tal:block tal:condition="exists: fenceinfo/level1">
 						<tal:block tal:repeat="cur_fencedev fenceinfo/level1">
 							<tal:block tal:define="
+								cur_fence_instances cur_fencedev/instance_list | nothing;
 								cur_fence_type cur_fencedev/agent | nothing;
 								cur_fence_level python: 1;">
 								<div tal:attributes="id python: 'fence1_' + str(cur_fence_num)">
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/14 21:37:15	1.183
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/14 23:14:54	1.184
@@ -726,14 +726,16 @@
 
 	if mcast_manual == True:
 		try:
-			addr_str = form['mcast_addr'].strip()
+			addr_str = form['mcast_address'].strip()
 			socket.inet_pton(socket.AF_INET, addr_str)
 		except KeyError, e:
+			addr_str = None
 			errors.append('No multicast address was given')
 		except socket.error, e:
 			try:
 				socket.inet_pton(socket.AF_INET6, addr_str)
 			except socket.error, e:
+				addr_str = None
 				errors.append('An invalid multicast address was given: %s')
 	else:
 		addr_str = None
@@ -745,6 +747,7 @@
 	try:
 		model.usesMulticast = True
 		model.mcast_address = addr_str
+		model.isModified = True
 	except Exception, e:
 		luci_log.debug('Error updating mcast properties: %s' % str(e))
 		errors.append('Unable to update cluster multicast properties')
@@ -3045,7 +3048,7 @@
 
 	if delete_cluster:
 		try:
-			set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_DELETE, "Deleting cluster \"%s\": Deleting node \'%s\'" \
+			set_node_flag(self, clustername, rc.hostname(), str(batch_number), CLUSTER_DELETE, "Deleting cluster \"%s\": Deleting node \'%s\'" \
 				% (clustername, nodename_resolved))
 		except Exception, e:
 			luci_log.debug_verbose('ND5a: failed to set flags: %s' % str(e))
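
The multicast validation adjusted at the top of this patch tries the
address as IPv4 first and falls back to IPv6, clearing it on failure. A
standalone sketch of that check (the function name is hypothetical);
note it validates syntax only, not that the address actually falls in a
multicast range:

import socket

def valid_mcast_addr(addr_str):
    # Accept the string if either address family can parse it.
    for family in (socket.AF_INET, socket.AF_INET6):
        try:
            socket.inet_pton(family, addr_str)
            return True
        except socket.error:
            continue
    return False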



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-12-14 18:22 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-12-14 18:22 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-12-14 18:22:53

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	display as much information as possible when getting cluster info from ricci fails

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.130&r2=1.131
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.181&r2=1.182

--- conga/luci/cluster/form-macros	2006/12/13 23:54:19	1.130
+++ conga/luci/cluster/form-macros	2006/12/14 18:22:53	1.131
@@ -76,7 +76,7 @@
 
 	<tal:block tal:condition="python: ricci_agent">
 		<tal:block tal:define="
-			global stat python: here.getClusterStatus(request, ricci_agent);
+			global stat python: here.getClusterStatus(request, ricci_agent, cluname=clu[0]);
 			global cstatus python: here.getClustersInfo(stat, request);
 			global cluster_status python: 'cluster ' + (('running' in cstatus and cstatus['running'] == 'true') and 'running' or 'stopped');"
 	 	/>
@@ -84,7 +84,7 @@
 	<table class="cluster" width="100%">
 	<tr class="cluster info_top">
 		<td class="cluster cluster_name">
-			<strong class="cluster cluster_name">Cluster Name</strong>:
+			<strong class="cluster cluster_name">Cluster Name:</strong>
 			<a href=""
 				tal:attributes="href cstatus/clucfg | nothing;
 								class python: 'cluster ' + cluster_status;"
@@ -124,7 +124,7 @@
 	<tr class="cluster">
 		<td tal:condition="exists: cstatus/error" class="cluster">
 			<span class="errmsgs">
-				An error occurred while attempting to get status information for this cluster. The information shown may be out of date.
+				An error occurred while attempting to get status information for this cluster. The information shown may be stale or inaccurate.
 			</span>
 		</td>
 	</tr>
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/14 17:02:56	1.181
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/14 18:22:53	1.182
@@ -1862,32 +1862,42 @@
 		except:
 			try:
 				hostname = node[0]
-			except:
+			except Exception, e:
+				luci_log.debug_verbose('GRA2a: %s' % str(e))
 				continue
 
 		try:
 			rc = RicciCommunicator(hostname)
-		except RicciError, e:
+			if not rc:
+				raise Exception, 'rc is None'
+			ricci_hostname = rc.hostname()
+			if not ricci_hostname:
+				raise Exception, 'ricci_hostname is blank'
+		except Exception, e:
 			luci_log.debug('GRA3: ricci error: %s' % str(e))
 			continue
 
 		try:
 			clu_info = rc.cluster_info()
 		except Exception, e:
-			luci_log.debug('GRA4: cluster_info error: %s' % str(e))
+			luci_log.debug('GRA4: cluster_info error for %s: %s' \
+				% (ricci_hostname, str(e)))
+			continue
 
 		try:
 			cur_name = str(clu_info[0]).strip().lower()
 			if not cur_name:
-				raise
-		except:
+				raise Exception, 'cluster name is none for %s' % ricci_hostname
+		except Exception, e:
+			luci_log.debug_verbose('GRA4a: %s' % str(e))
 			cur_name = None
 
 		try:
 			cur_alias = str(clu_info[1]).strip().lower()
 			if not cur_alias:
-				raise
-		except:
+				raise Exception, 'cluster alias is none'
+		except Exception, e:
+			luci_log.debug_verbose('GRA4b: %s' % str(e))
 			cur_alias = None
 			
 		if (cur_name is not None and cluname != cur_name) and (cur_alias is not None and cluname != cur_alias):
@@ -1899,14 +1909,20 @@
 				pass
 			continue
 
-		if rc.authed():
-			return rc
 		try:
-			setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
-		except:
-			pass
+			if rc.authed():
+				return rc
 
-	luci_log.debug('GRA6: no ricci agent could be found for cluster %s' \
+			try:
+				setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+			raise Exception, '%s not authed' % rc.hostname()
+		except Exception, e:
+			luci_log.debug_verbose('GRA6: %s' % str(e))
+			continue
+
+	luci_log.debug('GRA7: no ricci agent could be found for cluster %s' \
 		% cluname)
 	return None
 
@@ -1995,10 +2011,11 @@
 	results.append(vals)
 
 	try:
-		cluster_path = '%s/luci/systems/cluster/%s' % (CLUSTER_FOLDER_PATH, clustername)
+		cluster_path = CLUSTER_FOLDER_PATH + clustername
 		nodelist = self.restrictedTraverse(cluster_path).objectItems('Folder')
 	except Exception, e:
-		luci_log.debug_verbose('GCSDB0: %s: %s' % (clustername, str(e)))
+		luci_log.debug_verbose('GCSDB0: %s -> %s: %s' \
+			% (clustername, cluster_path, str(e)))
 		return results
 
 	for node in nodelist:
@@ -2014,7 +2031,7 @@
 			luci_log.debug_verbose('GCSDB1: %s' % str(e))
 	return results
 
-def getClusterStatus(self, request, rc):
+def getClusterStatus(self, request, rc, cluname=None):
 	try:
 		doc = getClusterStatusBatch(rc)
 		if not doc:
@@ -2036,14 +2053,15 @@
 
 	if not doc:
 		try:
-			clustername = None
-			try:
-				clustername = request['clustername']
-			except:
+			clustername = cluname
+			if clustername is None:
 				try:
-					clustername = request.form['clustername']
+					clustername = request['clustername']
 				except:
-					pass
+					try:
+						clustername = request.form['clustername']
+					except:
+						pass
 
 			if not clustername:
 				raise Exception, 'unable to determine cluster name'
@@ -2649,6 +2667,7 @@
   svclist = list()
   clulist = list()
   baseurl = req['URL']
+
   for item in status:
     if item['type'] == "node":
       nodelist.append(item)
@@ -2698,6 +2717,7 @@
       svc_dict_list.append(svc_dict)
   map['currentservices'] = svc_dict_list
   node_dict_list = list()
+
   for item in nodelist:
     nmap = {}
     name = item['name']
@@ -2713,7 +2733,6 @@
     node_dict_list.append(nmap)
 
   map['currentnodes'] = node_dict_list
-
   return map
 
 def nodeLeave(self, rc, clustername, nodename_resolved):
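
The new cluname parameter to getClusterStatus lets callers name the
cluster explicitly instead of relying on the request. A minimal sketch
of the resulting lookup order (the helper name is hypothetical):

def resolve_cluster_name(request, cluname=None):
    # Prefer an explicit argument, then the request itself, then
    # the posted form data; give up with None if all are missing.
    if cluname is not None:
        return cluname
    try:
        return request['clustername']
    except KeyError:
        pass
    try:
        return request.form['clustername']
    except (AttributeError, KeyError):
        return None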



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-12-11 22:42 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-12-11 22:42 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-12-11 22:42:35

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	more fixes for bz219156

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.127&r2=1.128
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.179&r2=1.180

--- conga/luci/cluster/form-macros	2006/12/11 21:51:13	1.127
+++ conga/luci/cluster/form-macros	2006/12/11 22:42:34	1.128
@@ -2781,11 +2781,19 @@
 
 <div metal:define-macro="nodeprocess-form">
 	<tal:block
-		tal:define="result python: here.nodeTaskProcess(modelb, request)"/>
+		tal:define="result python: here.nodeTaskProcess(modelb, request)">
 
-	<div>
-		<span tal:replace="result | nothing" />
-	</div>
+		<div id="errmsgsdiv" class="errmsgs"
+			tal:condition="python: result and len(result) > 1 and 'errors' in result[1]">
+			<p class="errmsgs">The following errors occurred:</p>
+
+			<ul class="errmsgs">
+				<tal:block tal:repeat="e python: result[1]['errors']">
+					<li class="errmsgs" tal:content="python: e" />
+				</tal:block>
+			</ul>
+		</div>
+	</tal:block>
 </div>
 
 <div metal:define-macro="services-form">
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/11 21:51:14	1.179
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/11 22:42:34	1.180
@@ -269,9 +269,8 @@
 				% (key, str(e)))
 
 def validateAddClusterNode(self, request):
-	errors = list()
-	messages = list()
 	requestResults = {}
+	errors = list()
 
 	try:
 		sessionData = request.SESSION.get('checkRet')
@@ -411,7 +410,7 @@
 			except Exception, e:
 				luci_log.debug_verbose('vACN7: %s' % str(e))
 				pass
-		next_node_id = 1;
+		next_node_id = 1
 		for i in nodeList:
 			next_node_id += 1
 			new_node = ClusterNode()
@@ -2662,9 +2661,7 @@
   if len(clulist) < 1:
     return {}
   clu = clulist[0]
-  cluerror = False
   if 'error' in clu:
-    cluerror = True
     map['error'] = True
   clustername = clu['name']
   if clu['alias'] != "":
@@ -2702,7 +2699,6 @@
   map['currentservices'] = svc_dict_list
   node_dict_list = list()
   for item in nodelist:
-    node_error = 'error' in item
     nmap = {}
     name = item['name']
     nmap['nodename'] = name
@@ -3034,30 +3030,30 @@
 def nodeTaskProcess(self, model, request):
 	try:
 		clustername = request['clustername']
-	except KeyError, e:
+	except:
 		try:
 			clustername = request.form['clustername']
 		except:
-			luci_log.debug('missing cluster name for NTP')
-			return None
+			luci_log.debug('NTP0: missing cluster name')
+			return (False, {'errors': [ 'No cluster name was given.' ]})
 
 	try:
 		nodename = request['nodename']
-	except KeyError, e:
+	except:
 		try:
 			nodename = request.form['nodename']
 		except:
-			luci_log.debug('missing nodename name for NTP')
-			return None
+			luci_log.debug('NTP1: missing node name')
+			return (False, {'errors': [ 'No node name was given.' ]})
 
 	try:
 		task = request['task']
-	except KeyError, e:
+	except:
 		try:
 			task = request.form['task']
 		except:
-			luci_log.debug('missing task for NTP')
-			return None
+			luci_log.debug('NTP2: missing task')
+			return (False, {'errors': [ 'No node task was given.' ]})
 
 	nodename_resolved = resolve_nodename(self, clustername, nodename)
 
@@ -3067,24 +3063,27 @@
 		# to be performed.
 		try:
 			rc = RicciCommunicator(nodename_resolved)
+			if not rc:
+				raise Exception, 'rc is None'
 		except RicciError, e:
-			luci_log.debug('ricci error from %s: %s' \
+			luci_log.debug('NTP3: ricci error from %s: %s' \
 				% (nodename_resolved, str(e)))
-			return None
+			return (False, {'errors': [ 'Unable to connect to the ricci agent on %s.' % nodename_resolved ]})
 		except:
-			return None
+			luci_log.debug('NTP4: unexpected error connecting to %s' \
+				% nodename_resolved)
+			return (False, {'errors': [ 'Unable to connect to the ricci agent on %s.' % nodename_resolved ]})
 
 		cluinfo = rc.cluster_info()
 		if not cluinfo[0] and not cluinfo[1]:
-			luci_log.debug('host %s not in a cluster (expected %s)' \
+			luci_log.debug('NTP5: node %s not in a cluster (expected %s)' \
 				% (nodename_resolved, clustername))
-			return None
+			return (False, {'errors': [ 'Node %s reports it is not in a cluster.' % nodename_resolved ]})
 
 		cname = lower(clustername)
 		if cname != lower(cluinfo[0]) and cname != lower(cluinfo[1]):
-			luci_log.debug('host %s in unknown cluster %s:%s (expected %s)' \
-				% (nodename_resolved, cluinfo[0], cluinfo[1], clustername))
-			return None
+			luci_log.debug('NTP6: node %s in unknown cluster %s:%s (expected %s)' % (nodename_resolved, cluinfo[0], cluinfo[1], clustername))
+			return (False, {'errors': [ 'Node %s reports it is in cluster \"%s\". We expect it to be a member of cluster \"%s\".' % (nodename_resolved, cluinfo[0], clustername) ]})
 
 		if not rc.authed():
 			rc = None
@@ -3103,40 +3102,45 @@
 				pass
 
 		if rc is None:
-			return None
+			luci_log.debug('NTP7: node %s is not authenticated' \
+				% nodename_resolved)
+			return (False, {'errors': [ 'Node %s is not authenticated' % nodename_resolved ]})
 
 	if task == NODE_LEAVE_CLUSTER:
 		if nodeLeave(self, rc, clustername, nodename_resolved) is None:
-			luci_log.debug_verbose('NTP: nodeLeave failed')
-			return None
+			luci_log.debug_verbose('NTP8: nodeLeave failed')
+			return (False, {'errors': [ 'Node %s failed to leave cluster %s' % (nodename_resolved, clustername) ]})
 
 		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_JOIN_CLUSTER:
 		if nodeJoin(self, rc, clustername, nodename_resolved) is None:
-			luci_log.debug_verbose('NTP: nodeJoin failed')
-			return None
+			luci_log.debug_verbose('NTP9: nodeJoin failed')
+			return (False, {'errors': [ 'Node %s failed to join cluster %s' % (nodename_resolved, clustername) ]})
 
 		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_REBOOT:
 		if forceNodeReboot(self, rc, clustername, nodename_resolved) is None:
-			luci_log.debug_verbose('NTP: nodeReboot failed')
-			return None
+			luci_log.debug_verbose('NTP10: nodeReboot failed')
+			return (False, {'errors': [ 'Node %s failed to reboot' \
+				% nodename_resolved ]})
 
 		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_FENCE:
 		if forceNodeFence(self, clustername, nodename, nodename_resolved) is None:
-			luci_log.debug_verbose('NTP: nodeFencefailed')
-			return None
+			luci_log.debug_verbose('NTP11: nodeFence failed')
+			return (False, {'errors': [ 'Fencing of node %s failed.' \
+				% nodename_resolved]})
 
 		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_DELETE:
 		if nodeDelete(self, rc, model, clustername, nodename, nodename_resolved) is None:
-			luci_log.debug_verbose('NTP: nodeDelete failed')
-			return None
+			luci_log.debug_verbose('NTP12: nodeDelete failed')
+			return (False, {'errors': [ 'Deletion of node %s from cluster %s failed.' % (nodename_resolved, clustername) ]})
+
 		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-12-11 21:51 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-12-11 21:51 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-12-11 21:51:14

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	fixes for bz219156

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.126&r2=1.127
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.178&r2=1.179

--- conga/luci/cluster/form-macros	2006/12/08 20:47:37	1.126
+++ conga/luci/cluster/form-macros	2006/12/11 21:51:13	1.127
@@ -2318,14 +2318,20 @@
 				<form method="post" onSubmit="return dropdown(this.gourl)">
 				<select name="gourl">
 					<option value="">Choose a Task...</option>
-					<option tal:attributes="value nodeinfo/jl_url">
+					<option tal:attributes="value nodeinfo/jl_url"
+						tal:condition="python: not 'ricci_error' in nodeinfo">
 						Have node <span tal:replace="python: nodeinfo['nodestate'] == '0' and 'leave' or 'join'" /> cluster
 					</option>
 					<option value="">----------</option>
 					<option tal:attributes="value nodeinfo/fence_url">Fence this node</option>
-					<option value="" tal:attributes="value nodeinfo/reboot_url">Reboot this node</option>
+					<option value="" tal:attributes="value nodeinfo/reboot_url"
+						tal:condition="python: not 'ricci_error' in nodeinfo">
+						Reboot this node
+					</option>
 					<option value="">----------</option>
-					<option tal:attributes="value nodeinfo/delete_url">Delete this node</option>
+					<option tal:attributes="value nodeinfo/delete_url"
+						tal:condition="python: not 'ricci_error' in nodeinfo">
+						Delete this node</option>
 				</select>
 				<input type="submit" value="Go"/>
 				</form>
@@ -2342,7 +2348,7 @@
 		</tr>
 
 		<tr class="cluster node info_bottom"
-			tal:condition="python: nodeinfo['nodestate'] == '0' or nodeinfo['nodestate'] == '1'">
+			tal:condition="python: (nodeinfo['nodestate'] == '0' or nodeinfo['nodestate'] == '1') and not 'ricci_error' in nodeinfo">
 			<td class="cluster node node_log" colspan="2">
 				<a class="cluster node"
 					tal:attributes="href nodeinfo/logurl" onClick="return popup_log(this, 'notes')">
@@ -2352,6 +2358,7 @@
 		</tr>
 	</table>
 
+	<tal:block tal:condition="python: not 'ricci_error' in nodeinfo">
 	<hr/>
 
 	<tal:block
@@ -2445,13 +2452,6 @@
 		global fenceinfo python: here.getFenceInfo(modelb, request);
 		global fencedevinfo python: here.getFencesInfo(modelb, request)" />
 
-	<div>
-		fenceinfo:
-		<span tal:replace="fenceinfo" /><br/>
-		fencedevinfo:
-		<span tal:replace="fencedevinfo" />
-	</div>
-
 	<div class="invisible" id="shared_fence_devices">
 		<tal:block tal:repeat="cur_fencedev fencedevinfo/fencedevs">
 			<tal:block metal:use-macro="here/form-macros/macros/shared-fence-device-list" />
@@ -2557,6 +2557,13 @@
 		</tr>
 		</tbody>
 	</table>
+	</tal:block>
+	<tal:block tal:condition="python: 'ricci_error' in nodeinfo">
+		<hr/>
+		<strong class="errmsgs">
+			The ricci agent for this node is unresponsive. Node-specific information is not available at this time.
+		</strong>
+	</tal:block>
 </div>
 
 <div metal:define-macro="nodes-form">
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/08 23:02:49	1.178
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/11 21:51:14	1.179
@@ -3231,6 +3231,7 @@
         raise Exception, 'rc is none'
     except Exception, e:
       rc = None
+      infohash['ricci_error'] = True
       luci_log.info('Error connecting to %s: %s' \
           % (nodename_resolved, str(e)))
 
@@ -3242,6 +3243,8 @@
       dlist.append("rgmanager")
       states = getDaemonStates(rc, dlist)
       infohash['d_states'] = states
+  else:
+    infohash['ricci_error'] = True
 
   infohash['logurl'] = '/luci/logs/?nodename=' + nodename_resolved + '&clustername=' + clustername
   return infohash
@@ -3333,7 +3336,12 @@
 
     map['currentservices'] = svc_dict_list
     #next is faildoms
-    fdoms = model.getFailoverDomainsForNode(name)
+
+    if model:
+      fdoms = model.getFailoverDomainsForNode(name)
+    else:
+      map['ricci_error'] = True
+      fdoms = list()
     fdom_dict_list = list()
     for fdom in fdoms:
       fdom_dict = {}



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-12-06 22:11 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-12-06 22:11 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-12-06 22:11:20

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	- Fix an IP corner case for setting/editing services
	- Fix the autostart setting and its display

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.123&r2=1.124
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.174&r2=1.175

--- conga/luci/cluster/form-macros	2006/12/06 21:16:35	1.123
+++ conga/luci/cluster/form-macros	2006/12/06 22:11:20	1.124
@@ -2825,7 +2825,7 @@
 							This service is stopped
 						</tal:block>
 					</div>
-					<p>Autostart is <span tal:condition="not: autostart" tal:replace="string:not" /> enabled for this service</p>
+					<p>Autostart is <span tal:condition="python: autostart.lower() == 'false'" tal:replace="string:not" /> enabled for this service</p>
 				</td>
 			</tr>
 
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/06 21:16:35	1.174
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/06 22:11:20	1.175
@@ -468,6 +468,13 @@
 			return (False, {'errors': [ 'An invalid resource type was specified' ]})
 
 		try:
+			if res_type == 'ip':
+				dummy_form['resourceName'] = dummy_form['ip_address']
+		except Exception, e:
+			luci_log.debug_verbose('vSA3a: type is ip but no addr: %s' % str(e))
+			return (False, {'errors': [ 'No IP address was given.' ]})
+
+		try:
 			if dummy_form.has_key('immutable'):
 				newRes = getResource(model, dummy_form['resourceName'])
 				resObj = RefObject(newRes)
@@ -493,7 +500,7 @@
 
 	autostart = "1"
 	try:
-		if not request.form.has_key('autostart'):
+		if request.form['autostart'] == "0":
 			autostart = "0"
 	except:
 		pass
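
The autostart fix inverts the default: the old code treated a missing
form key as "disabled", while the new code defaults to enabled and only
an explicit "0" disables it. A standalone sketch of the new behavior
(the function name is hypothetical):

def autostart_flag(form):
    # Default to enabled; only an explicit "0" turns autostart off.
    try:
        if form['autostart'] == "0":
            return "0"
    except KeyError:
        pass
    return "1"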



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-12-06 21:16 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-12-06 21:16 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-12-06 21:16:35

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: ClusterNode.py cluster_adapters.py 

Log message:
	Related: #218040

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.122&r2=1.123
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterNode.py.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.173&r2=1.174

--- conga/luci/cluster/form-macros	2006/12/06 18:38:54	1.122
+++ conga/luci/cluster/form-macros	2006/12/06 21:16:35	1.123
@@ -77,7 +77,7 @@
 
 	<tal:block tal:condition="python: ricci_agent">
 		<tal:block tal:define="
-			global stat python: here.getClusterStatus(ricci_agent);
+			global stat python: here.getClusterStatus(request, ricci_agent);
 			global cstatus python: here.getClustersInfo(stat, request);
 			global cluster_status python: 'cluster ' + (('running' in cstatus and cstatus['running'] == 'true') and 'running' or 'stopped');"
 	 	/>
@@ -122,14 +122,22 @@
 		</td>
 	</tr>
 
+	<tr class="cluster">
+		<td tal:condition="exists: cstatus/error" class="cluster">
+			<span class="errmsgs">
+				An error occurred while attempting to get status information for this cluster. The information shown may be out of date.
+			</span>
+		</td>
+	</tr>
+
 	<tr class="cluster info_middle">
 		<td colspan="2" class="cluster cluster_quorum">
 			<ul class="cluster_quorum"
 				tal:condition="exists: cstatus/status">
 
-				<li><strong class="cluster">Status</strong>: <span tal:replace="cstatus/status"/></li>
-				<li><strong class="cluster">Total Cluster Votes</strong>: <span tal:replace="cstatus/votes"/></li>
-				<li><strong class="cluster">Minimum Required Quorum</strong>: <span tal:replace="cstatus/minquorum"/></li>
+				<li><strong class="cluster">Status</strong>: <span tal:replace="cstatus/status | string:[unknown]"/></li>
+				<li><strong class="cluster">Total Cluster Votes</strong>: <span tal:replace="cstatus/votes | string:[unknown]"/></li>
+				<li><strong class="cluster">Minimum Required Quorum</strong>: <span tal:replace="cstatus/minquorum | string:[unknown]"/></li>
 			</ul>
 		</td>
 	</tr>
@@ -2288,7 +2296,7 @@
 		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
 
 	<tal:block tal:define="
-		global nodestatus python: here.getClusterStatus(ricci_agent);
+		global nodestatus python: here.getClusterStatus(request, ricci_agent);
 		global nodeinfo python: here.getNodeInfo(modelb, nodestatus, request);
 		global status_class python: 'node_' + (nodeinfo['nodestate'] == '0' and 'active' or (nodeinfo['nodestate'] == '1' and 'inactive' or 'unknown'));
 		global cluster_node_status_str python: (nodeinfo['nodestate'] == '0' and 'Cluster member' or (nodeinfo['nodestate'] == '1' and 'Currently not a cluster participant' or 'This node is not responding'));
@@ -2531,7 +2539,7 @@
 		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
 
 	<tal:block tal:define="
-		global status python: here.getClusterStatus(ricci_agent);
+		global status python: here.getClusterStatus(request, ricci_agent);
 		global nds python: here.getNodesInfo(modelb, status, request)" />
 
 	<div tal:repeat="nd nds">
@@ -2752,7 +2760,7 @@
 		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
 
 	<tal:block tal:define="
-		global svcstatus python: here.getClusterStatus(ricci_agent);
+		global svcstatus python: here.getClusterStatus(request, ricci_agent);
 		global svcinf python: here.getServicesInfo(svcstatus,modelb,request);
 		global svcs svcinf/services" />
 
@@ -3027,7 +3035,7 @@
 
 	<tal:block tal:define="
 		global global_resources python: here.getResourcesInfo(modelb, request);
-		global sstat python: here.getClusterStatus(ricci_agent);
+		global sstat python: here.getClusterStatus(request, ricci_agent);
 		global sinfo python: here.getServiceInfo(sstat, modelb, request);
 		global running sinfo/running | nothing;" />
 
@@ -3217,7 +3225,7 @@
 		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
 
 	<tal:block tal:define="
-		global sta python: here.getClusterStatus(ricci_agent);
+		global sta python: here.getClusterStatus(request, ricci_agent);
 		global fdominfo python: here.getFdomsInfo(modelb, request, sta);" />
 
 	<div class="cluster fdom" tal:repeat="fdom fdominfo">
--- conga/luci/site/luci/Extensions/ClusterNode.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/ClusterNode.py	2006/12/06 21:16:35	1.2
@@ -96,3 +96,10 @@
     except KeyError, e:
       return ""
 
+  def getVotes(self):
+    try:
+      return self.getAttribute('votes')
+    except KeyError, e:
+      return "1"
+    except:
+      return None
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/06 18:38:54	1.173
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/12/06 21:16:35	1.174
@@ -1835,23 +1835,133 @@
 		return None
 	return getRicciAgent(self, clustername)
 
-def getClusterStatus(self, rc):
+def getClusterStatusModel(model):
+	results = list()
+	vals = {}
+
+	try:
+		clustername = model.getClusterName()
+		clusteralias = model.getClusterAlias()
+		vals['type'] = 'cluster'
+		vals['alias'] = clusteralias
+		vals['name'] = clustername
+		vals['error'] = True
+		vals['votes'] = '[unknown]'
+		vals['quorate'] = '[unknown]'
+		vals['minQuorum'] = '[unknown]'
+		results.append(vals)
+	except Exception, e:
+		luci_log.debug_verbose('GCSM0: %s' % str(e))
+		return None
+
+	try:
+		nodelist = model.getNodes()
+	except Exception, e:
+		luci_log.debug_verbose('GCSM1: %s' % str(e))
+		return None
+
+	for node in nodelist:
+		node_val = {}
+		node_val['type'] = 'node'
+		try:
+			node_name = node.getName()
+			if not node_name:
+				raise Exception, 'cluster node name is unknown'
+		except:
+			node_name = '[unknown]'
+
+		node_val['name'] = node_name
+		node_val['clustered'] = '[unknown]'
+		node_val['online'] = '[unknown]'
+		node_val['error'] = True
+
+		try:
+			votes = node.getVotes()
+			if not votes:
+				raise Exception, 'unknown number of votes'
+		except:
+			votes = '[unknown]'
+
+		node_val['votes'] = votes
+		results.append(node_val)
+	return results
+
+def getClusterStatusDB(self, clustername):
+	results = list()
+	vals = {}
+
+	vals['type'] = 'cluster'
+	vals['alias'] = clustername
+	vals['name'] = clustername
+	vals['error'] = True
+	vals['quorate'] = '[unknown]'
+	vals['votes'] = '[unknown]'
+	vals['minQuorum'] = '[unknown]'
+	results.append(vals)
+
+	try:
+		cluster_path = '%s/luci/systems/cluster/%s' % (CLUSTER_FOLDER_PATH, clustername)
+		nodelist = self.restrictedTraverse(cluster_path).objectItems('Folder')
+	except Exception, e:
+		luci_log.debug_verbose('GCSDB0: %s: %s' % (clustername, str(e)))
+		return results
+
+	for node in nodelist:
+		try:
+			node_val = {}
+			node_val['type'] = 'node'
+			node_val['name'] = node[0]
+			node_val['clustered'] = '[unknown]'
+			node_val['online'] = '[unknown]'
+			node_val['error'] = True
+			results.append(node_val)
+		except Exception, e:
+			luci_log.debug_verbose('GCSDB1: %s' % str(e))
+	return results
+
+def getClusterStatus(self, request, rc):
 	try:
 		doc = getClusterStatusBatch(rc)
+		if not doc:
+			raise Exception, 'doc is None'
 	except Exception, e:
 		luci_log.debug_verbose('GCS0: error: %s' % str(e))
 		doc = None
 
+	if doc is None:
+		try:
+			model = request.SESSION.get('model')
+			cinfo = getClusterStatusModel(model)
+			if not cinfo or len(cinfo) < 1:
+				raise Exception, 'cinfo is None'
+			return cinfo
+		except Exception, e:
+			luci_log.debug_verbose('GCS1: %s' % str(e))
+			doc = None
+
 	if not doc:
 		try:
-			luci_log.debug_verbose('GCS1: returned None for %s/%s' % rc.cluster_info())
-		except:
-			pass
+			clustername = None
+			try:
+				clustername = request['clustername']
+			except:
+				try:
+					clustername = request.form['clustername']
+				except:
+					pass
 
-		return {}
+			if not clustername:
+				raise Exception, 'unable to determine cluster name'
 
-	results = list()
+			cinfo = getClusterStatusDB(self, clustername)
+			if not cinfo or len(cinfo) < 1:
+				raise Exception, 'cinfo is None'
+			return cinfo
+		except Exception, e:
+			luci_log.debug_verbose('GCS1a: unable to get cluster info from DB: %s' % str(e))
+		return []
 
+	results = list()
 	vals = {}
 	vals['type'] = "cluster"
 
@@ -2315,39 +2425,31 @@
         return {}
 
   if model is None:
-    rc = getRicciAgent(self, cluname)
-    if not rc:
-      luci_log.debug_verbose('GCI1: unable to find a ricci agent for the %s cluster' % cluname)
-      return {}
     try:
-      model = getModelBuilder(None, rc, rc.dom0())
+      model = getModelForCluster(self, cluname)
       if not model:
         raise Exception, 'model is none'
-
-      try:
-        req.SESSION.set('model', model)
-      except Exception, e2:
-        luci_log.debug_verbose('GCI2 unable to set model in session: %s' % str(e2))
+      req.SESSION.set('model', model)
     except Exception, e:
-      luci_log.debug_verbose('GCI3: unable to get model for cluster %s: %s' % (cluname, str(e)))
+      luci_log.debug_verbose('GCI1: unable to get model for cluster %s: %s' % (cluname, str(e)))
       return {}
 
   prop_baseurl = req['URL'] + '?' + PAGETYPE + '=' + CLUSTER_CONFIG + '&' + CLUNAME + '=' + cluname + '&'
-  map = {}
+  clumap = {}
   basecluster_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_GENERAL_TAB
   #needed:
-  map['basecluster_url'] = basecluster_url
+  clumap['basecluster_url'] = basecluster_url
   #name field
-  map['clustername'] = model.getClusterAlias()
+  clumap['clustername'] = model.getClusterAlias()
   #config version
   cp = model.getClusterPtr()
-  map['config_version'] = cp.getConfigVersion()
+  clumap['config_version'] = cp.getConfigVersion()
   #-------------
   #new cluster params - if rhel5
   #-------------
   #Fence Daemon Props
   fencedaemon_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_FENCE_TAB
-  map['fencedaemon_url'] = fencedaemon_url
+  clumap['fencedaemon_url'] = fencedaemon_url
   fdp = model.getFenceDaemonPtr()
   pjd = fdp.getAttribute('post_join_delay')
   if pjd is None:
@@ -2356,35 +2458,35 @@
   if pfd is None:
     pfd = "0"
   #post join delay
-  map['pjd'] = pjd
+  clumap['pjd'] = pjd
   #post fail delay
-  map['pfd'] = pfd
+  clumap['pfd'] = pfd
   #-------------
   #if multicast
   multicast_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_MCAST_TAB
-  map['multicast_url'] = multicast_url
+  clumap['multicast_url'] = multicast_url
   #mcast addr
   is_mcast = model.isMulticast()
-  #map['is_mcast'] = is_mcast
+  #clumap['is_mcast'] = is_mcast
   if is_mcast:
-    map['mcast_addr'] = model.getMcastAddr()
-    map['is_mcast'] = "True"
+    clumap['mcast_addr'] = model.getMcastAddr()
+    clumap['is_mcast'] = "True"
   else:
-    map['is_mcast'] = "False"
-    map['mcast_addr'] = "1.2.3.4"
+    clumap['is_mcast'] = "False"
+    clumap['mcast_addr'] = "1.2.3.4"
 
   #-------------
   #quorum disk params
   quorumd_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_QDISK_TAB
-  map['quorumd_url'] = quorumd_url
+  clumap['quorumd_url'] = quorumd_url
   is_quorumd = model.isQuorumd()
-  map['is_quorumd'] = is_quorumd
-  map['interval'] = ""
-  map['tko'] = ""
-  map['votes'] = ""
-  map['min_score'] = ""
-  map['device'] = ""
-  map['label'] = ""
+  clumap['is_quorumd'] = is_quorumd
+  clumap['interval'] = ""
+  clumap['tko'] = ""
+  clumap['votes'] = ""
+  clumap['min_score'] = ""
+  clumap['device'] = ""
+  clumap['label'] = ""
 
   #list struct for heuristics...
   hlist = list()
@@ -2393,27 +2495,27 @@
     qdp = model.getQuorumdPtr()
     interval = qdp.getAttribute('interval')
     if interval is not None:
-      map['interval'] = interval
+      clumap['interval'] = interval
 
     tko = qdp.getAttribute('tko')
     if tko is not None:
-      map['tko'] = tko
+      clumap['tko'] = tko
 
     votes = qdp.getAttribute('votes')
     if votes is not None:
-      map['votes'] = votes
+      clumap['votes'] = votes
 
     min_score = qdp.getAttribute('min_score')
     if min_score is not None:
-      map['min_score'] = min_score
+      clumap['min_score'] = min_score
 
     device = qdp.getAttribute('device')
     if device is not None:
-      map['device'] = device
+      clumap['device'] = device
 
     label = qdp.getAttribute('label')
     if label is not None:
-      map['label'] = label
+      clumap['label'] = label
 
     heuristic_kids = qdp.getChildren()
     h_ctr = 0
@@ -2442,9 +2544,9 @@
       else:
         hmap['hinterval'] = ""
       hlist.append(hmap)
-  map['hlist'] = hlist
+  clumap['hlist'] = hlist
 
-  return map
+  return clumap
 
 def getClustersInfo(self, status, req):
   map = {}
@@ -2464,6 +2566,10 @@
   if len(clulist) < 1:
     return {}
   clu = clulist[0]
+  cluerror = False
+  if 'error' in clu:
+    cluerror = True
+    map['error'] = True
   clustername = clu['name']
   if clu['alias'] != "":
     map['clusteralias'] = clu['alias']
@@ -2478,6 +2584,7 @@
     map['running'] = "false"
   map['votes'] = clu['votes']
   map['minquorum'] = clu['minQuorum']
+
   map['clucfg'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_CONFIG + "&" + CLUNAME + "=" + clustername
 
   map['restart_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_RESTART
@@ -2499,6 +2606,7 @@
   map['currentservices'] = svc_dict_list
   node_dict_list = list()
   for item in nodelist:
+    node_error = 'error' in item
     nmap = {}
     name = item['name']
     nmap['nodename'] = name
@@ -3001,15 +3109,16 @@
 
   infohash['currentservices'] = svc_dict_list
 
-  #next is faildoms
-  fdoms = model.getFailoverDomainsForNode(nodename)
   fdom_dict_list = list()
-  for fdom in fdoms:
-    fdom_dict = {}
-    fdom_dict['name'] = fdom.getName()
-    fdomurl = baseurl + "?" + PAGETYPE + "=" + FDOM_CONFIG + "&" + CLUNAME + "=" + clustername + "&fdomname=" + fdom.getName()
-    fdom_dict['fdomurl'] = fdomurl
-    fdom_dict_list.append(fdom_dict)
+  if model:
+    #next is faildoms
+    fdoms = model.getFailoverDomainsForNode(nodename)
+    for fdom in fdoms:
+      fdom_dict = {}
+      fdom_dict['name'] = fdom.getName()
+      fdomurl = baseurl + "?" + PAGETYPE + "=" + FDOM_CONFIG + "&" + CLUNAME + "=" + clustername + "&fdomname=" + fdom.getName()
+      fdom_dict['fdomurl'] = fdomurl
+      fdom_dict_list.append(fdom_dict)
 
   infohash['fdoms'] = fdom_dict_list
 
@@ -3040,7 +3149,6 @@
 
   infohash['logurl'] = '/luci/logs/?nodename=' + nodename_resolved + '&clustername=' + clustername
   return infohash
-  #get list of faildoms for node
 
 def getNodesInfo(self, model, status, req):
   resultlist = list()
@@ -3144,6 +3252,10 @@
   return resultlist
 
 def getFence(self, model, request):
+  if not model:
+    luci_log.debug_verbose('getFence0: model is None')
+    return {}
+
   map = {}
   fencename = request['fencename']
   fencedevs = model.getFenceDevices()
@@ -3190,6 +3302,10 @@
   raise
   
 def getFenceInfo(self, model, request):
+  if not model:
+    luci_log.debug_verbose('getFenceInfo00: model is None')
+    return {}
+
   try:
     clustername = request['clustername']
   except:
@@ -3440,9 +3556,14 @@
   return map    
       
 def getFencesInfo(self, model, request):
+  map = {}
+  if not model:
+    luci_log.debug_verbose('getFencesInfo0: model is None')
+    map['fencedevs'] = list()
+    return map
+
   clustername = request['clustername']
   baseurl = request['URL']
-  map = {}
   fencedevs = list() #This is for the fencedev list page
 
   #Get list of fence devices
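
The bulk of the hunks above is a mechanical rename of the local result dict from "map" to "clumap". The log does not spell out the motivation, but the practical effect is to stop shadowing Python's built-in map() inside these adapters. A minimal sketch of the pitfall, illustrative code rather than anything from the patch:

    def broken():
        map = {}                         # local dict shadows the built-in
        map['mcast_addr'] = '1.2.3.4'
        return map(str, [1, 2, 3])       # TypeError: 'dict' object is not callable

    def fixed():
        clumap = {}                      # distinct name leaves the built-in usable
        clumap['mcast_addr'] = '1.2.3.4'
        return list(map(str, [1, 2, 3])) # ['1', '2', '3']

The same patch also adds "if not model" guards to getFence, getFenceInfo and getFencesInfo, so each returns an empty structure instead of raising when no cluster model is available.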




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-11-13 21:40 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-11-13 21:40 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-11-13 21:40:55

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 
	                           conga_constants.py ricci_bridge.py 

Log message:
	fix for bz# 215034 (Cannot change daemon properties via luci web app)

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.104&r2=1.105
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.162&r2=1.163
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&r1=1.25&r2=1.26
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&r1=1.42&r2=1.43

--- conga/luci/cluster/form-macros	2006/11/12 02:10:52	1.104
+++ conga/luci/cluster/form-macros	2006/11/13 21:40:55	1.105
@@ -1791,7 +1791,7 @@
 		tal:condition="python: nodeinfo['nodestate'] == '0' or nodeinfo['nodestate'] == '1'">
 
 	<h3>Cluster daemons running on this node</h3>
-	<form name="daemon_form">
+	<form name="daemon_form" method="post">
 	<table class="systemsTable">
 		<thead>
 			<tr class="systemsTable">
@@ -1803,23 +1803,38 @@
 		<tfoot class="systemsTable">
 			<tr class="systemsTable"><td class="systemsTable" colspan="3">
 				<div class="systemsTableEnd">
-					<input type="button" value="Update node daemon properties" />
+					<input type="Submit" value="Update node daemon properties" />
 				</div>
 			</td></tr>
 		</tfoot>
 		<tbody class="systemsTable">
 			<tr class="systemsTable" tal:repeat="demon nodeinfo/d_states">
 				<td class="systemsTable"><span tal:replace="demon/name"/></td>
-				<td class="systemsTable"><span tal:replace="python: demon['running'] and 'yes' or 'no'" /></td>
+				<td class="systemsTable"><span tal:replace="python: demon['running'] == 'true' and 'yes' or 'no'" /></td>
 				<td class="systemsTable">
-					<input type="checkbox"
-						tal:attributes="
-							name python: nodeinfo['nodename'] + ':' + demon['name'];
-							checked python: demon['enabled'] and 'checked'" />
+					<input type="hidden" tal:attributes="
+						name python: '__daemon__:' + demon['name'] + ':';
+						value demon/name" />
+
+					<input type="hidden" tal:attributes="
+						name python: '__daemon__:' + demon['name'] + ':';
+						value python: demon['enabled'] == 'true' and '1' or '0'" />
+
+					<input type="checkbox" tal:attributes="
+						name python: '__daemon__:' + demon['name'] + ':';
+						checked python: demon['enabled'] == 'true' and 'checked'" />
 				</td>
 			</tr>
 		</tbody>
 	</table>
+
+	<input type="hidden" name="nodename"
+		tal:attributes="value nodeinfo/nodename | request/nodename | nothing" />
+
+	<input type="hidden" name="clustername"
+		tal:attributes="value request/clustername | nothing" />
+
+	<input type="hidden" name="pagetype" value="55" />
 	</form>
 	<hr/>
 	</tal:block>
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/12 02:10:53	1.162
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/13 21:40:55	1.163
@@ -815,7 +815,7 @@
 		except Exception, e:
 			luci_log.debug_verbose('VCC4: export model as string failed: %s' \
 				% str(e))
-			errors.append('unable to store the new cluster configuration')
+			errors.append('Unable to store the new cluster configuration')
 
 	try:
 		clustername = model.getClusterName()
@@ -823,7 +823,7 @@
 			raise Exception, 'cluster name from modelb.getClusterName() is blank'
 	except Exception, e:
 		luci_log.debug_verbose('VCC5: error: getClusterName: %s' % str(e))
-		errors.append('unable to determine cluster name from model') 
+		errors.append('Unable to determine cluster name from model') 
 
 	if len(errors) > 0:
 		return (retcode, {'errors': errors, 'messages': messages})
@@ -832,14 +832,14 @@
 		rc = getRicciAgent(self, clustername)
 		if not rc:
 			luci_log.debug_verbose('VCC6: unable to find a ricci agent for the %s cluster' % clustername)
-			errors.append('unable to contact a ricci agent for cluster %s' \
+			errors.append('Unable to contact a ricci agent for cluster %s' \
 				% clustername)
 
 	if rc:
 		batch_id, result = setClusterConf(rc, str(conf_str))
 		if batch_id is None or result is None:
 			luci_log.debug_verbose('VCC7: setClusterConf: batchid or result is None')
-			errors.append('unable to propagate the new cluster configuration for %s' \
+			errors.append('Unable to propagate the new cluster configuration for %s' \
 				% clustername)
 		else:
 			try:
@@ -862,6 +862,89 @@
 def validateFenceEdit(self, request):
 	return (True, {})
 
+def validateDaemonProperties(self, request):
+	errors = list()
+
+	form = None
+	try:
+		response = request.response
+		form = request.form
+		if not form:
+			form = None
+			raise Exception, 'no form was submitted'
+	except:
+		pass
+
+	if form is None:
+		luci_log.debug_verbose('VDP0: no form was submitted')
+		return (False, {'errors': ['No form was submitted']})
+
+	try:
+		nodename = form['nodename'].strip()
+		if not nodename:
+			raise Exception, 'nodename is blank'
+	except Exception, e:
+		errors.append('Unable to determine the current node name')
+		luci_log.debug_verbose('VDP1: no nodename: %s' % str(e))
+
+	try:
+		clustername = form['clustername'].strip()
+		if not clustername:
+			raise Exception, 'clustername is blank'
+	except Exception, e:
+		errors.append('Unable to determine the current cluster name')
+		luci_log.debug_verbose('VDP2: no clustername: %s' % str(e))
+
+	disable_list = list()
+	enable_list = list()
+	for i in form.items():
+		try:
+			if i[0][:11] == '__daemon__:':
+				daemon_prop = i[1]
+				if len(daemon_prop) == 2:
+					if daemon_prop[1] == '1':
+						disable_list.append(daemon_prop[0])
+				else:
+					if daemon_prop[1] == '0' and daemon_prop[2] == 'on':
+						enable_list.append(daemon_prop[0])
+		except Exception, e:
+			luci_log.debug_verbose('VDP3: error: %s' % str(i))
+
+	if len(enable_list) < 1 and len(disable_list) < 1:
+		luci_log.debug_verbose('VDP4: no changes made')
+		response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename)
+
+	nodename_resolved = resolve_nodename(self, clustername, nodename)
+	try:
+		rc = RicciCommunicator(nodename_resolved)
+		if not rc:
+			raise Exception, 'rc is None'
+	except Exception, e:
+		luci_log.debug_verbose('VDP5: RC %s: %s' % (nodename_resolved, str(e)))
+		errors.append('Unable to connect to the ricci agent on %s to update cluster daemon properties' % nodename_resolved)
+		return (False, {'errors': errors})
+		
+	batch_id, result = updateServices(rc, enable_list, disable_list)
+	if batch_id is None or result is None:
+		luci_log.debug_verbose('VDP6: setClusterConf: batchid or result is None')
+		errors.append('Unable to update the cluster daemon properties on node %s' % nodename_resolved)
+		return (False, {'errors': errors})
+
+	try:
+		status_msg = 'Updating %s daemon properties:' % nodename_resolved
+		if len(enable_list) > 0:
+			status_msg += ' enabling %s' % str(enable_list)[1:-1]
+		if len(disable_list) > 0:
+			status_msg += ' disabling %s' % str(disable_list)[1:-1]
+		set_node_flag(self, clustername, rc.hostname(), batch_id, CLUSTER_DAEMON, status_msg)
+	except:
+		pass
+
+	if len(errors) > 0:
+		return (False, {'errors': errors})
+
+	response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename + '&busyfirst=true')
+
 formValidators = {
 	6: validateCreateCluster,
 	7: validateConfigCluster,
@@ -872,11 +955,18 @@
 	33: validateResourceAdd,
 	51: validateFenceAdd,
 	50: validateFenceEdit,
+	55: validateDaemonProperties
 }
 
 def validatePost(self, request):
-	pagetype = int(request.form['pagetype'])
+	try:
+		pagetype = int(request.form['pagetype'])
+	except Exception, e:
+		luci_log.debug_verbose('VP0: error: %s' % str(e))
+		return None
+
 	if not pagetype in formValidators:
+		luci_log.debug_verbose('VP1: no handler for page type %d' % pagetype)
 		return None
 	else:
 		return formValidators[pagetype](self, request)
--- conga/luci/site/luci/Extensions/conga_constants.py	2006/11/12 02:10:53	1.25
+++ conga/luci/site/luci/Extensions/conga_constants.py	2006/11/13 21:40:55	1.26
@@ -42,6 +42,7 @@
 FENCEDEV_LIST="52"
 FENCEDEV_CONFIG="53"
 FENCEDEV="54"
+CLUSTER_DAEMON="55"
 
 #Cluster tasks
 CLUSTER_STOP = '1000'
--- conga/luci/site/luci/Extensions/ricci_bridge.py	2006/11/12 02:10:53	1.42
+++ conga/luci/site/luci/Extensions/ricci_bridge.py	2006/11/13 21:40:55	1.43
@@ -18,7 +18,7 @@
 		return False
 
 	try:
-		batchid = batch.getAttribute('batch_id')
+		dummy = batch.getAttribute('batch_id')
 		result = batch.getAttribute('status')
 	except:
 		return False
@@ -471,6 +471,26 @@
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
+def updateServices(rc, enable_list, disable_list):
+	batch = ''
+
+	if enable_list and len(enable_list) > 0:
+		batch += '<module name="service"><request API_version="1.0"><function_call name="enable"><var mutable="false" name="services" type="list_xml">'
+		for i in enable_list:
+			batch += '<service name="%s"/>' % str(i)
+		batch += '</var></function_call></request></module>'
+
+	if disable_list and len(disable_list) > 0:
+		batch += '<module name="service"><request API_version="1.0"><function_call name="disable"><var mutable="false" name="services" type="list_xml">'
+		for i in disable_list:
+			batch += '<service name="%s"/>' % str(i)
+		batch += '</var></function_call></request></module>'
+
+	if batch == '':
+		return None
+	ricci_xml = rc.batch_run(batch)
+	return batchAttemptResult(ricci_xml)
+	
 def restartService(rc, servicename):
 	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="restart_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
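
The daemon table above emits up to three same-named inputs per daemon, all keyed "__daemon__:<name>:": the daemon's name, its prior enabled state ('1' or '0'), and the checkbox value 'on' when checked. They reach validateDaemonProperties as an ordered list per key, which is what the len() tests walk. A standalone sketch of that decoding, with illustrative names (the patch's version treats any length other than 2 as the checked case):

    def split_daemon_changes(form_items):
        # [name, was_enabled]        -> checkbox unchecked
        # [name, was_enabled, 'on']  -> checkbox checked
        enable, disable = [], []
        for key, val in form_items:
            if not key.startswith('__daemon__:'):
                continue
            if len(val) == 2 and val[1] == '1':
                disable.append(val[0])    # was enabled, now unchecked
            elif len(val) == 3 and val[1] == '0' and val[2] == 'on':
                enable.append(val[0])     # was disabled, now checked
        return enable, disable

    # e.g. a POST turning rgmanager on and cman off:
    items = [('__daemon__:cman:', ['cman', '1']),
             ('__daemon__:rgmanager:', ['rgmanager', '0', 'on'])]
    assert split_daemon_changes(items) == (['rgmanager'], ['cman'])

The hidden pagetype=55 input routes the POST through formValidators to validateDaemonProperties, and the two lists feed updateServices, which batches one ricci "enable" call and one "disable" call.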
 




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-11-12  2:10 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-11-12  2:10 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-11-12 02:10:53

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: LuciSyslog.py cluster_adapters.py 
	                           conga_constants.py ricci_bridge.py 
	                           ricci_communicator.py 

Log message:
	fix for bz# 213266

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.103&r2=1.104
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciSyslog.py.diff?cvsroot=cluster&r1=1.9&r2=1.10
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.161&r2=1.162
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&r1=1.24&r2=1.25
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&r1=1.41&r2=1.42
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_communicator.py.diff?cvsroot=cluster&r1=1.18&r2=1.19

--- conga/luci/cluster/form-macros	2006/11/10 19:44:57	1.103
+++ conga/luci/cluster/form-macros	2006/11/12 02:10:52	1.104
@@ -25,26 +25,33 @@
       </span>
       <span tal:condition="python: 'isnodecreation' in nodereport and nodereport['isnodecreation'] == True">
        <span tal:condition="python: nodereport['iserror'] == True">
-			  <h2><span tal:content="nodereport/desc" /></h2>
-         <font color="red"><span tal:content="nodereport/errormessage"/></font>
+		<h2><span tal:content="nodereport/desc" /></h2>
+		<span class="errmsg" tal:content="nodereport/errormessage"/>
        </span>
+
        <span tal:condition="python: nodereport['iserror'] == False">
-			  <h2><span tal:content="nodereport/desc" /></h2>
-         <i><span tal:content="nodereport/statusmessage"/></i><br/>
-          <span tal:condition="python: nodereport['statusindex'] == 0">
+		<h2><span tal:content="nodereport/desc" /></h2>
+		<em tal:content="nodereport/statusmessage | nothing"/><br/>
+          <span tal:condition="python: nodereport['statusindex'] < 1">
            <img src="notstarted.png"/>
           </span>
-          <span tal:condition="python: nodereport['statusindex'] == 1">
-           <img src="installed.png"/>
-          </span>
-          <span tal:condition="python: nodereport['statusindex'] == 2">
-           <img src="rebooted.png"/>
+
+          <span tal:condition="
+			python: nodereport['statusindex'] == 1 or nodereport['statusindex'] == 2">
+           <img src="installed.png" alt="[cluster software installed]" />
           </span>
+
           <span tal:condition="python: nodereport['statusindex'] == 3">
-           <img src="configured.png"/>
+           <img src="rebooted.png" alt="[cluster node rebooted]" />
+          </span>
+
+          <span tal:condition="
+				python: nodereport['statusindex'] == 4 or nodereport['statusindex'] == 5">
+           <img src="configured.png" alt="[cluster node configured]" />
           </span>
-          <span tal:condition="python: nodereport['statusindex'] == 4">
-           <img src="joined.png"/>
+
+          <span tal:condition="python: nodereport['statusindex'] == 6">
+           <img src="joined.png" alt="[cluster node joined cluster]" />
           </span>
        </span>
       </span>
@@ -378,6 +385,7 @@
 	<tal:block
 		tal:define="global clusterinfo python: here.getClusterInfo(modelb, request)" />
 
+<tal:block tal:condition="clusterinfo">
 	<span tal:omit-tag="" tal:define="global configTabNum python: 'tab' in request and int(request['tab']) or 1" />
 
 	<ul class="configTab">
@@ -439,7 +447,7 @@
 					<td class="systemsTable">Cluster Name</td>
 					<td class="systemsTable">
 						<input type="text" name="cluname"
-							tal:attributes="value clusterinfo/clustername"/>
+							tal:attributes="value clusterinfo/clustername" />
 					</td>
 				</tr>
 				<tr class="systemsTable">
@@ -1082,6 +1090,7 @@
 		</script>
 		</form>
 	</div>
+</tal:block>
 </div>
 
 <div metal:define-macro="clusterprocess-form">
@@ -2117,7 +2126,10 @@
 <div metal:define-macro="nodeprocess-form">
 	<tal:block
 		tal:define="result python: here.nodeTaskProcess(modelb, request)"/>
-	<h2>Node Process Form</h2>
+
+	<div>
+		<span tal:replace="result | nothing" />
+	</div>
 </div>
 
 <div metal:define-macro="services-form">
--- conga/luci/site/luci/Extensions/LuciSyslog.py	2006/11/06 20:21:04	1.9
+++ conga/luci/site/luci/Extensions/LuciSyslog.py	2006/11/12 02:10:53	1.10
@@ -3,14 +3,12 @@
 		LOG_DAEMON, LOG_PID, LOG_NDELAY, LOG_INFO, \
 		LOG_WARNING, LOG_AUTH, LOG_DEBUG
 
-"""Exception class for the LuciSyslog facility
-"""
+# Exception class for the LuciSyslog facility
 class LuciSyslogError(Exception):
 	def __init__(self, msg):
 		Exception.__init__(self, msg)
 
-"""Facility that provides centralized syslog(3) functionality for luci
-"""
+# Facility that provides centralized syslog(3) functionality for luci
 class LuciSyslog:
 	def __init__(self):
 		self.__init = 0
@@ -50,11 +48,24 @@
 	def debug_verbose(self, msg):
 		if not LUCI_DEBUG_MODE or LUCI_DEBUG_VERBOSITY < 2 or not self.__init:
 			return
-		try:
-			syslog(LOG_DEBUG, msg)
-		except:
-			pass
-			#raise LuciSyslogError, 'syslog debug call failed'
+
+		msg_len = len(msg)
+		if msg_len < 1:
+			return
+
+		while True:
+			cur_len = min(msg_len, 800)
+			cur_msg = msg[:cur_len]
+			try:
+				syslog(LOG_DEBUG, cur_msg)
+			except:
+				pass
+
+			msg_len -= cur_len
+			if msg_len > 0:
+				msg = msg[cur_len:]
+			else:
+				break
 
 	def debug(self, msg):
 		if not LUCI_DEBUG_MODE or not self.__init:
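
The rewritten debug_verbose above replaces a single syslog() call with a loop that writes the message in 800-byte slices; syslog transports commonly cap individual messages near 1 KB (RFC 3164 limits packets to 1024 bytes), so long XML dumps were previously being truncated. The same behavior, condensed into a sketch rather than the project's exact code:

    from syslog import LOG_DEBUG, syslog

    CHUNK = 800   # stay under typical syslog(3) per-message limits

    def log_chunked(msg):
        # emit msg in CHUNK-sized slices; a failed write must never
        # propagate to the caller, so each slice is best-effort
        for start in range(0, len(msg), CHUNK):
            try:
                syslog(LOG_DEBUG, msg[start:start + CHUNK])
            except Exception:
                pass
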
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/10 19:44:57	1.161
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/12 02:10:53	1.162
@@ -339,7 +339,8 @@
 	while i < len(nodeList):
 		clunode = nodeList[i]
 		try:
-			batchNode = addClusterNodeBatch(clusterName,
+			batchNode = addClusterNodeBatch(clunode['os'],
+							clusterName,
 							True,
 							True,
 							enable_storage,
@@ -370,8 +371,8 @@
 		success = True
 		try:
 			rc = RicciCommunicator(clunode['host'])
-		except:
-			luci_log.info('Unable to connect to the ricci daemon on host ' + clunode['host'])
+		except Exception, e:
+			luci_log.info('Unable to connect to the ricci daemon on host %s: %s'% (clunode['host'], str(e)))
 			success = False
 
 		if success:
@@ -995,6 +996,9 @@
 def createCluConfigTree(self, request, model):
   dummynode = {}
 
+  if not model:
+    return {}
+
   #There should be a positive page type
   try:
     pagetype = request[PAGETYPE]
@@ -1418,6 +1422,8 @@
   return model.getClusterName()
 
 def getClusterAlias(self, model):
+  if not model:
+    return ''
   alias = model.getClusterAlias()
   if alias is None:
     return model.getClusterName()
@@ -1539,7 +1545,21 @@
 		except Exception, e:
 			luci_log.debug('GRA4: cluster_info error: %s' % str(e))
 
-		if cluname != lower(clu_info[0]) and cluname != lower(clu_info[1]):
+		try:
+			cur_name = str(clu_info[0]).strip().lower()
+			if not cur_name:
+				raise
+		except:
+			cur_name = None
+
+		try:
+			cur_alias = str(clu_info[1]).strip().lower()
+			if not cur_alias:
+				raise
+		except:
+			cur_alias = None
+			
+		if (cur_name is not None and cluname != cur_name) and (cur_alias is not None and cluname != cur_alias):
 			try:
 				luci_log.debug('GRA5: %s reports it\'s in cluster %s:%s; we expect %s' \
 					 % (hostname, clu_info[0], clu_info[1], cluname))
@@ -1580,12 +1600,18 @@
 	return getRicciAgent(self, clustername)
 
 def getClusterStatus(self, rc):
-	doc = getClusterStatusBatch(rc)
+	try:
+		doc = getClusterStatusBatch(rc)
+	except Exception, e:
+		luci_log.debug_verbose('GCS0: error: %s' % str(e))
+		doc = None
+
 	if not doc:
 		try:
-			luci_log.debug_verbose('getClusterStatusBatch returned None for %s/%s' % rc.cluster_info())
+			luci_log.debug_verbose('GCS1: returned None for %s/%s' % rc.cluster_info())
 		except:
 			pass
+
 		return {}
 
 	results = list()
@@ -2031,7 +2057,7 @@
 
 	response = request.RESPONSE
 	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
-		 % (request['URL'], NODES, model.getClusterName()))
+		% (request['URL'], NODES, model.getClusterName()))
 
 def getClusterInfo(self, model, req):
   try:
@@ -2061,7 +2087,7 @@
       except Exception, e2:
         luci_log.debug_verbose('GCI2 unable to set model in session: %s' % str(e2))
     except Exception, e:
-      luci_log.debug_verbose('GCI3: unable to get model for cluster %s: %s' % cluname, str(e))
+      luci_log.debug_verbose('GCI3: unable to get model for cluster %s: %s' % (cluname, str(e)))
       return {}
 
   prop_baseurl = req['URL'] + '?' + PAGETYPE + '=' + CLUSTER_CONFIG + '&' + CLUNAME + '=' + cluname + '&'
@@ -2639,34 +2665,34 @@
 			return None
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODE_LIST + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_JOIN_CLUSTER:
 		if nodeJoin(self, rc, clustername, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP: nodeJoin failed')
 			return None
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODE_LIST + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_REBOOT:
 		if forceNodeReboot(self, rc, clustername, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP: nodeReboot failed')
 			return None
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODE_LIST + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_FENCE:
 		if forceNodeFence(self, clustername, nodename, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP: nodeFence failed')
 			return None
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODE_LIST + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_DELETE:
 		if nodeDelete(self, rc, model, clustername, nodename, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP: nodeDelete failed')
 			return None
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODE_LIST + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 
 def getNodeInfo(self, model, status, request):
   infohash = {}
@@ -3396,7 +3422,7 @@
           luci_log.debug_verbose('ICB6b: rc is none')
       except Exception, e:
         rc = None
-        luci_log.debug_verbose('ICB7: ricci returned error in iCB for %s: %s' \
+        luci_log.debug_verbose('ICB7: RC: %s: %s' \
           % (cluname, str(e)))
 
       batch_id = None
@@ -3410,7 +3436,8 @@
             luci_log.debug_verbose('ICB8B: failed to get batch_id from %s: %s' \
                 % (item[0], str(e)))
           except:
-            luci_log.debug_verbose('ICB8C: failed to get batch_id from %s' % item[0])
+            luci_log.debug_verbose('ICB8C: failed to get batch_id from %s' \
+              % item[0])
 
         if batch_id is not None:
           try:
@@ -3458,18 +3485,31 @@
           elif laststatus == 0:
             node_report['statusindex'] = 0
             node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_INSTALL
+          elif laststatus == DISABLE_SVC_TASK:
+            node_report['statusindex'] = DISABLE_SVC_TASK
+            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_CFG
           elif laststatus == REBOOT_TASK:
             node_report['statusindex'] = REBOOT_TASK
             node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_CFG
           elif laststatus == SEND_CONF:
             node_report['statusindex'] = SEND_CONF
             node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_JOIN
+          elif laststatus == ENABLE_SVC_TASK:
+            node_report['statusindex'] = ENABLE_SVC_TASK
+            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_JOIN
+          else:
+            node_report['statusindex'] = 0
+            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + ' Install is in an unknown state.'
           nodereports.append(node_report)
           continue
         elif creation_status == -(INSTALL_TASK):
           node_report['iserror'] = True
           (err_code, err_msg) = extract_module_status(batch_xml, INSTALL_TASK)
           node_report['errormessage'] = CLUNODE_CREATE_ERRORS[INSTALL_TASK] + err_msg
+        elif creation_status == -(DISABLE_SVC_TASK):
+          node_report['iserror'] = True
+          (err_code, err_msg) = extract_module_status(batch_xml, DISABLE_SVC_TASK)
+          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[DISABLE_SVC_TASK] + err_msg
         elif creation_status == -(REBOOT_TASK):
           node_report['iserror'] = True
           (err_code, err_msg) = extract_module_status(batch_xml, REBOOT_TASK)
@@ -3478,6 +3518,10 @@
           node_report['iserror'] = True
           (err_code, err_msg) = extract_module_status(batch_xml, SEND_CONF)
           node_report['errormessage'] = CLUNODE_CREATE_ERRORS[SEND_CONF] + err_msg
+        elif creation_status == -(ENABLE_SVC_TASK):
+          node_report['iserror'] = True
+          (err_code, err_msg) = extract_module_status(batch_xml, ENABLE_SVC_TASK)
+          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[ENABLE_SVC_TASK] + err_msg
         elif creation_status == -(START_NODE):
           node_report['iserror'] = True
           (err_code, err_msg) = extract_module_status(batch_xml, START_NODE)
@@ -3485,7 +3529,13 @@
         else:
           node_report['iserror'] = True
           node_report['errormessage'] = CLUNODE_CREATE_ERRORS[0]
-        clusterfolder.manage_delObjects(item[0])
+
+        try:
+          clusterfolder.manage_delObjects(item[0])
+        except Exception, e:
+          luci_log.debug_verbose('ICB14: delObjects: %s: %s' \
+            % (item[0], str(e)))
+
         nodereports.append(node_report)
         continue
       else:  #either batch completed successfully, or still running
@@ -3497,7 +3547,7 @@
           try:
               clusterfolder.manage_delObjects(item[0])
           except Exception, e:
-              luci_log.info('ICB14: Unable to delete %s: %s' % (item[0], str(e)))
+              luci_log.info('ICB15: Unable to delete %s: %s' % (item[0], str(e)))
           continue
         else:
           map['busy'] = "true"
@@ -3507,8 +3557,12 @@
           nodereports.append(node_report)
           propslist = list()
           propslist.append(LAST_STATUS)
-          item[1].manage_delProperties(propslist)
-          item[1].manage_addProperty(LAST_STATUS, creation_status, "int")
+          try:
+            item[1].manage_delProperties(propslist)
+            item[1].manage_addProperty(LAST_STATUS, creation_status, "int")
+          except Exception, e:
+            luci_log.debug_verbose('ICB16: last_status err: %s %d: %s' \
+              % (item[0], creation_status, str(e)))
           continue
           
     else:
@@ -3548,6 +3602,7 @@
   if isBusy:
     part1 = req['ACTUAL_URL']
     part2 = req['QUERY_STRING']
+
     dex = part2.find("&busyfirst")
     if dex != (-1):
       tmpstr = part2[:dex] #This strips off busyfirst var
@@ -3555,7 +3610,6 @@
      ###FIXME - The above assumes that the 'busyfirst' query var is at the
       ###end of the URL...
     wholeurl = part1 + "?" + part2
-    #map['url'] = "5, url=" + req['ACTUAL_URL'] + "?" + req['QUERY_STRING']
     map['refreshurl'] = "5; url=" + wholeurl
     req['specialpagetype'] = "1"
   else:
@@ -3564,7 +3618,6 @@
       map['refreshurl'] = '5; url=' + req['ACTUAL_URL'] + '?' + query
     except:
       map['refreshurl'] = '5; url=/luci/cluster?pagetype=3'
-  luci_log.debug_verbose('ICB17: refreshurl is \"%s\"' % map['refreshurl'])
   return map
 
 def getClusterOS(self, rc):
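
The busyfirst handling above strips the parameter with a simple find() and slice, which is why the FIXME warns that busyfirst must sit at the end of the query string. A position-independent version is straightforward; this is a sketch of the alternative the FIXME hints at, not code from the patch:

    def strip_busyfirst(query):
        # keep every query parameter except busyfirst, wherever it appears
        parts = [p for p in query.split('&') if not p.startswith('busyfirst')]
        return '&'.join(parts)

    assert strip_busyfirst('pagetype=3&busyfirst=true&tab=1') == 'pagetype=3&tab=1'
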
--- conga/luci/site/luci/Extensions/conga_constants.py	2006/11/09 20:32:02	1.24
+++ conga/luci/site/luci/Extensions/conga_constants.py	2006/11/12 02:10:53	1.25
@@ -91,26 +91,36 @@
 NODE_UNKNOWN_STR="Unknown State"
 
 #cluster/node create batch task index
-INSTALL_TASK=1
-REBOOT_TASK=2
-SEND_CONF=3
-START_NODE=4
-RICCI_CONNECT_FAILURE=(-1000)
+INSTALL_TASK = 1
+DISABLE_SVC_TASK = 2
+REBOOT_TASK = 3
+SEND_CONF = 4
+ENABLE_SVC_TASK = 5
+START_NODE = 6
+RICCI_CONNECT_FAILURE = (-1000)
 
-RICCI_CONNECT_FAILURE_MSG="A problem was encountered connecting with this node.  "
+RICCI_CONNECT_FAILURE_MSG = "A problem was encountered connecting with this node.  "
 #cluster/node create error messages
-CLUNODE_CREATE_ERRORS = ["An unknown error occurred when creating this node: ", "A problem occurred when installing packages: ","A problem occurred when rebooting this node: ", "A problem occurred when propagating the configuration to this node: ", "A problem occurred when starting this node: "]
+CLUNODE_CREATE_ERRORS = [
+	"An unknown error occurred when creating this node: ",
+	"A problem occurred when installing packages: ",
+	"A problem occurred when disabling cluster services on this node: ",
+	"A problem occurred when rebooting this node: ",
+	"A problem occurred when propagating the configuration to this node: ",
+	"A problem occurred when enabling cluster services on this node: ",
+	"A problem occurred when starting this node: "
+]
 
 #cluster/node create error status messages
-PRE_INSTALL="The install state is not yet complete"
-PRE_REBOOT="Installation complete, but reboot not yet complete"
-PRE_CFG="Reboot stage successful, but configuration for the cluster is not yet distributed"
-PRE_JOIN="Packages are installed and configuration has been distributed, but the node has not yet joined the cluster."
+PRE_INSTALL = "The install state is not yet complete"
+PRE_REBOOT = "Installation complete, but reboot not yet complete"
+PRE_CFG = "Reboot stage successful, but configuration for the cluster is not yet distributed"
+PRE_JOIN = "Packages are installed and configuration has been distributed, but the node has not yet joined the cluster."
 
 
-POSSIBLE_REBOOT_MESSAGE="This node is not currently responding and is probably<br/>rebooting as planned. This state should persist for 5 minutes or so..."
+POSSIBLE_REBOOT_MESSAGE = "This node is not currently responding and is probably<br/>rebooting as planned. This state should persist for 5 minutes or so..."
 
-REDIRECT_MSG="  You will be redirected in 5 seconds. Please fasten your safety restraints."
+REDIRECT_MSG = " You will be redirected in 5 seconds. Please fasten your safety restraints."
 
 
 # Homebase-specific constants
@@ -128,7 +138,7 @@
 CLUSTER_NODE_NOT_MEMBER = 0x02
 CLUSTER_NODE_ADDED = 0x04
 
-PLONE_ROOT='luci'
+PLONE_ROOT = 'luci'
 
 LUCI_DEBUG_MODE = 1
 LUCI_DEBUG_VERBOSITY = 2
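
The renumbered task indices and the expanded CLUNODE_CREATE_ERRORS list are kept in lockstep: the node-creation status code (the ICB hunks above) reports a failure in step N as creation_status == -(N), and N doubles as the index of the matching message. A sketch of the convention, where creation_error is an illustrative helper rather than a function from the patch:

    INSTALL_TASK, DISABLE_SVC_TASK, REBOOT_TASK = 1, 2, 3
    SEND_CONF, ENABLE_SVC_TASK, START_NODE = 4, 5, 6

    CLUNODE_CREATE_ERRORS = [
        "An unknown error occurred when creating this node: ",
        "A problem occurred when installing packages: ",
        "A problem occurred when disabling cluster services on this node: ",
        "A problem occurred when rebooting this node: ",
        "A problem occurred when propagating the configuration to this node: ",
        "A problem occurred when enabling cluster services on this node: ",
        "A problem occurred when starting this node: ",
    ]

    def creation_error(creation_status, err_msg):
        # -(task index) marks which batch step failed; the same index
        # selects the message, with slot 0 as the catch-all
        task = -creation_status
        if 0 < task < len(CLUNODE_CREATE_ERRORS):
            return CLUNODE_CREATE_ERRORS[task] + err_msg
        return CLUNODE_CREATE_ERRORS[0] + err_msg
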
--- conga/luci/site/luci/Extensions/ricci_bridge.py	2006/11/06 23:55:23	1.41
+++ conga/luci/site/luci/Extensions/ricci_bridge.py	2006/11/12 02:10:53	1.42
@@ -28,7 +28,8 @@
 
 	return False
 
-def addClusterNodeBatch(cluster_name,
+def addClusterNodeBatch(os_str,
+						cluster_name,
 						install_base,
 						install_services,
 						install_shared_storage,
@@ -65,13 +66,31 @@
 		
 	need_reboot = install_base or install_services or install_shared_storage or install_LVS
 	if need_reboot:
+		batch += '<module name="service">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="disable">'
+		batch += '<var mutable="false" name="services" type="list_xml">'
+		if os_str == 'rhel4':
+			batch += '<service name="ccsd"/>'
+		batch += '<service name="cman"/>'
+		batch += '</var>'
+		batch += '</function_call>'
+		batch += '</request>'
+		batch += '</module>'
+
 		batch += '<module name="reboot">'
 		batch += '<request API_version="1.0">'
 		batch += '<function_call name="reboot_now"/>'
 		batch += '</request>'
 		batch += '</module>'
 	else:
-		# need placeholder instead of reboot
+		# need 2 placeholders instead of disable services / reboot
+		batch += '<module name="rpm">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="install"/>'
+		batch += '</request>'
+		batch += '</module>'
+
 		batch += '<module name="rpm">'
 		batch += '<request API_version="1.0">'
 		batch += '<function_call name="install"/>'
@@ -95,6 +114,26 @@
 	batch += '</request>'
 	batch += '</module>'
 
+	if need_reboot:
+		batch += '<module name="service">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="enable">'
+		batch += '<var mutable="false" name="services" type="list_xml">'
+		if os_str == 'rhel4':
+			batch += '<service name="ccsd"/>'
+		batch += '<service name="cman"/>'
+		batch += '</var>'
+		batch += '</function_call>'
+		batch += '</request>'
+		batch += '</module>'
+	else:
+		# placeholder instead of enable services
+		batch += '<module name="rpm">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="install"/>'
+		batch += '</request>'
+		batch += '</module>'
+
 	batch += '<module name="cluster">'
 	batch += '<request API_version="1.0">'
 	batch += '<function_call name="start_node"/>'
@@ -142,13 +181,31 @@
 
 	need_reboot = install_base or install_services or install_shared_storage or install_LVS
 	if need_reboot:
+		batch += '<module name="service">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="disable">'
+		batch += '<var mutable="false" name="services" type="list_xml">'
+		if os_str == 'rhel4':
+			batch += '<service name="ccsd"/>'
+		batch += '<service name="cman"/>'
+		batch += '</var>'
+		batch += '</function_call>'
+		batch += '</request>'
+		batch += '</module>'
+
 		batch += '<module name="reboot">'
 		batch += '<request API_version="1.0">'
 		batch += '<function_call name="reboot_now"/>'
 		batch += '</request>'
 		batch += '</module>'
 	else:
-		# need placeholder instead of reboot
+		# need 2 placeholders instead of disable services / reboot
+		batch += '<module name="rpm">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="install"/>'
+		batch += '</request>'
+		batch += '</module>'
+
 		batch += '<module name="rpm">'
 		batch += '<request API_version="1.0">'
 		batch += '<function_call name="install"/>'
@@ -188,6 +245,26 @@
 	batch += '</request>'
 	batch += '</module>'
 
+	if need_reboot:
+		batch += '<module name="service">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="enable">'
+		batch += '<var mutable="false" name="services" type="list_xml">'
+		if os_str == 'rhel4':
+			batch += '<service name="ccsd"/>'
+		batch += '<service name="cman"/>'
+		batch += '</var>'
+		batch += '</function_call>'
+		batch += '</request>'
+		batch += '</module>'
+	else:
+		# placeholder instead of enable services
+		batch += '<module name="rpm">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="install"/>'
+		batch += '</request>'
+		batch += '</module>'
+
 	batch += '<module name="cluster">'
 	batch += '<request API_version="1.0">'
 	batch += '<function_call name="start_node">'
@@ -301,7 +378,7 @@
 def getNodeLogs(rc):
 	errstr = 'log not accessible'
 
-	batch_str = '<module name="log"><request sequence="1254" API_version="1.0"><function_call name="get"><var mutable="false" name="age" type="int" value="18000"/><var mutable="false" name="tags" type="list_str"></var></function_call></request></module>'
+	batch_str = '<module name="log"><request API_version="1.0"><function_call name="get"><var mutable="false" name="age" type="int" value="18000"/><var mutable="false" name="tags" type="list_str"></var></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str, async=False)
 	if not ricci_xml:
@@ -350,7 +427,7 @@
 	return entry
 
 def nodeReboot(rc):
-	batch_str = '<module name="reboot"><request sequence="111" API_version="1.0"><function_call name="reboot_now"/></request></module>'
+	batch_str = '<module name="reboot"><request API_version="1.0"><function_call name="reboot_now"/></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
@@ -364,13 +441,13 @@
 	if purge == False:
 		purge_conf = 'false'
 
-	batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="stop_node"><var mutable="false" name="cluster_shutdown" type="boolean" value="' + cshutdown + '"/><var mutable="false" name="purge_conf" type="boolean" value="' + purge_conf + '"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="stop_node"><var mutable="false" name="cluster_shutdown" type="boolean" value="' + cshutdown + '"/><var mutable="false" name="purge_conf" type="boolean" value="' + purge_conf + '"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
 def nodeFence(rc, nodename):
-	batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="fence_node"><var mutable="false" name="nodename" type="string" value="' + nodename + '"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="fence_node"><var mutable="false" name="nodename" type="string" value="' + nodename + '"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
@@ -380,28 +457,28 @@
 	if cluster_startup == True:
 		cstartup = 'true'
 
-	batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="start_node"><var mutable="false" name="cluster_startup" type="boolean" value="' + cstartup + '"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="start_node"><var mutable="false" name="cluster_startup" type="boolean" value="' + cstartup + '"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
 def startService(rc, servicename, preferrednode=None):
 	if preferrednode != None:
-		batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/><var mutable="false" name="nodename" type="string" value=\"' + preferrednode + '\" /></function_call></request></module>'
+		batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/><var mutable="false" name="nodename" type="string" value=\"' + preferrednode + '\" /></function_call></request></module>'
 	else:
-		batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+		batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
 def restartService(rc, servicename):
-	batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="restart_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="restart_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
 def stopService(rc, servicename):
-	batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="stop_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="stop_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
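
Both batch builders above drop in no-op rpm "install" modules when the disable/reboot/enable steps are skipped. A batch's module positions are its progress indices: batch_status and extract_module_status count modules, so every run has to contain the same number of modules for DISABLE_SVC_TASK, REBOOT_TASK and friends to keep naming the same step. Schematically (a sketch of the invariant, not the project's code):

    # an rpm "install" request with no packages does nothing, but it
    # still occupies one module slot in the batch
    PLACEHOLDER = ('<module name="rpm"><request API_version="1.0">'
                   '<function_call name="install"/></request></module>')

    def step(module_xml, wanted):
        # real module when the step runs, no-op otherwise, so module
        # position N always corresponds to batch task index N
        if wanted:
            return module_xml
        return PLACEHOLDER

With that invariant, extract_module_status(batch_xml, DISABLE_SVC_TASK) addresses step 2 whether or not any services were actually disabled.
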
--- conga/luci/site/luci/Extensions/ricci_communicator.py	2006/11/06 23:55:23	1.18
+++ conga/luci/site/luci/Extensions/ricci_communicator.py	2006/11/12 02:10:53	1.19
@@ -34,7 +34,7 @@
             raise RicciError, 'Error connecting to %s:%d: unknown error' \
                     % (self.__hostname, self.__port)
 
-        luci_log.debug_verbose('Connected to %s:%d' \
+        luci_log.debug_verbose('RC:init0: Connected to %s:%d' \
             % (self.__hostname, self.__port))
         try:
             self.ss = ssl(sock, self.__privkey_file, self.__cert_file)
@@ -51,7 +51,7 @@
         # receive ricci header
         hello = self.__receive()
         try:
-            luci_log.debug_verbose('Received header from %s: \"%s\"' \
+            luci_log.debug_verbose('RC:init1: Received header from %s: \"%s\"' \
                 % (self.__hostname, hello.toxml()))
         except:
             pass
@@ -67,34 +67,34 @@
     
     
     def hostname(self):
-        luci_log.debug_verbose('[auth %d] reported hostname = %s' \
+        luci_log.debug_verbose('RC:hostname: [auth %d] reported hostname = %s' \
             % (self.__authed, self.__hostname))
         return self.__hostname
     def authed(self):
-        luci_log.debug_verbose('reported authed = %d for %s' \
+        luci_log.debug_verbose('RC:authed: reported authed = %d for %s' \
             % (self.__authed, self.__hostname))
         return self.__authed
     def system_name(self):
-        luci_log.debug_verbose('[auth %d] reported system_name = %s for %s' \
+        luci_log.debug_verbose('RC:system_name: [auth %d] reported system_name = %s for %s' \
             % (self.__authed, self.__reported_hostname, self.__hostname))
         return self.__reported_hostname
     def cluster_info(self):
-        luci_log.debug_verbose('[auth %d] reported cluster_info = (%s,%s) for %s' \
+        luci_log.debug_verbose('RC:cluster_info: [auth %d] reported cluster_info = (%s,%s) for %s' \
             % (self.__authed, self.__cluname, self.__clualias, self.__hostname))
         return (self.__cluname, self.__clualias)
     def os(self):
-        luci_log.debug_verbose('[auth %d] reported system_name = %s for %s' \
+        luci_log.debug_verbose('RC:os: [auth %d] reported system_name = %s for %s' \
             % (self.__authed, self.__os, self.__hostname))
         return self.__os
     def dom0(self):
-        luci_log.debug_verbose('[auth %d] reported system_name = %s for %s' \
+        luci_log.debug_verbose('RC:dom0: [auth %d] reported system_name = %s for %s' \
             % (self.__authed, self.__dom0, self.__hostname))
         return self.__dom0
     
     
     def auth(self, password):
         if self.authed():
-            luci_log.debug_verbose('already authenticated to %s' \
+            luci_log.debug_verbose('RC:auth0: already authenticated to %s' \
                 % self.__hostname)
             return True
         
@@ -111,7 +111,8 @@
         resp = self.__receive()
         self.__authed = resp.firstChild.getAttribute('authenticated') == 'true'
 
-        luci_log.debug_verbose('auth call returning %d' % self.__authed)
+        luci_log.debug_verbose('RC:auth1: auth call returning %d' \
+			% self.__authed)
         return self.__authed
 
 
@@ -124,26 +125,26 @@
         self.__send(doc)
         resp = self.__receive()
 
-        luci_log.debug_verbose('trying to unauthenticate to %s' \
+        luci_log.debug_verbose('RC:unauth0: trying to unauthenticate to %s' \
             % self.__hostname)
 
         try:
             ret = resp.firstChild.getAttribute('success')
-            luci_log.debug_verbose('unauthenticate returned %s for %s' \
+            luci_log.debug_verbose('RC:unauth1: unauthenticate returned %s for %s' \
                 % (ret, self.__hostname))
             if ret != '0':
                 raise Exception, 'Invalid response'
         except:
             errstr = 'Error authenticating to host %s: %s' \
                         % (self.__hostname, str(ret))
-            luci_log.debug(errstr)
+            luci_log.debug_verbose('RC:unauth2:' + errstr)
             raise RicciError, errstr
         return True
 
 
     def process_batch(self, batch_xml, async=False):
         try:
-            luci_log.debug_verbose('auth=%d to %s for batch %s [async=%d]' \
+            luci_log.debug_verbose('RC:PB0: [auth=%d] to %s for batch %s [async=%d]' \
                 % (self.__authed, self.__hostname, batch_xml.toxml(), async))
         except:
             pass
@@ -169,7 +170,7 @@
         try:
             self.__send(doc)
         except Exception, e:
-            luci_log.debug('Error sending XML \"%s\" to host %s' \
+            luci_log.debug_verbose('RC:PB1: Error sending XML \"%s\" to host %s' \
                 % (doc.toxml(), self.__hostname))
             raise RicciError, 'Error sending XML to host %s: %s' \
                     % (self.__hostname, str(e))
@@ -179,13 +180,13 @@
         # receive response
         doc = self.__receive()
         try:
-            luci_log.debug_verbose('received from %s XML \"%s\"' \
+            luci_log.debug_verbose('RC:PB2: received from %s XML \"%s\"' \
                 % (self.__hostname, doc.toxml()))
         except:
             pass
  
         if doc.firstChild.getAttribute('success') != '0':
-            luci_log.debug_verbose('batch command failed')
+            luci_log.debug_verbose('RC:PB3: batch command failed')
             raise RicciError, 'The last ricci command to host %s failed' \
                     % self.__hostname
         
@@ -195,7 +196,7 @@
                 if node.nodeName == 'batch':
                     batch_node = node.cloneNode(True)
         if batch_node == None:
-            luci_log.debug_verbose('batch node missing <batch/>')
+            luci_log.debug_verbose('RC:PB4: batch node missing <batch/>')
             raise RicciError, 'missing <batch/> in ricci\'s response from %s' \
                     % self.__hostname
 
@@ -204,23 +205,23 @@
     def batch_run(self, batch_str, async=True):
         try:
             batch_xml_str = '<?xml version="1.0" ?><batch>' + batch_str + '</batch>'
-            luci_log.debug_verbose('attempting batch \"%s\" for host %s' \
+            luci_log.debug_verbose('RC:BRun0: attempting batch \"%s\" for host %s' \
                 % (batch_xml_str, self.__hostname))
             batch_xml = minidom.parseString(batch_xml_str).firstChild
         except Exception, e:
-            luci_log.debug('received invalid batch XML for %s: \"%s\": %s' \
+            luci_log.debug_verbose('RC:BRun1: received invalid batch XML for %s: \"%s\": %s' \
                 % (self.__hostname, batch_xml_str, str(e)))
             raise RicciError, 'batch XML is malformed'
 
         try:
             ricci_xml = self.process_batch(batch_xml, async)
             try:
-                luci_log.debug_verbose('received XML \"%s\" from host %s in response to batch command.' \
+                luci_log.debug_verbose('RC:BRun2: received XML \"%s\" from host %s in response to batch command.' \
                     % (ricci_xml.toxml(), self.__hostname))
             except:
                 pass
         except:
-            luci_log.debug('An error occurred while trying to process the batch job: %s' % batch_xml_str)
+            luci_log.debug_verbose('RC:BRun3: An error occurred while trying to process the batch job: \"%s\"' % batch_xml_str)
             return None
 
         doc = minidom.Document()
@@ -228,7 +229,7 @@
         return doc
 
     def batch_report(self, batch_id):
-        luci_log.debug_verbose('[auth=%d] asking for batchid# %s for host %s' \
+        luci_log.debug_verbose('RC:BRep0: [auth=%d] asking for batchid# %s for host %s' \
             % (self.__authed, batch_id, self.__hostname))
 
         if not self.authed():
@@ -271,7 +272,7 @@
             try:
                 pos = self.ss.write(buff)
             except Exception, e:
-                luci_log.debug('Error sending XML \"%s\" to %s: %s' \
+                luci_log.debug_verbose('RC:send0: Error sending XML \"%s\" to %s: %s' \
                     % (buff, self.__hostname, str(e)))
                 raise RicciError, 'write error while sending XML to host %s' \
                         % self.__hostname
@@ -280,7 +281,7 @@
                         % self.__hostname
             buff = buff[pos:]
         try:
-            luci_log.debug_verbose('Sent XML \"%s\" to host %s' \
+            luci_log.debug_verbose('RC:send1: Sent XML \"%s\" to host %s' \
                 % (xml_doc.toxml(), self.__hostname))
         except:
             pass
@@ -302,19 +303,19 @@
                     # we haven't received all of the XML data yet.
                     continue
         except Exception, e:
-            luci_log.debug('Error reading data from %s: %s' \
+            luci_log.debug_verbose('RC:recv0: Error reading data from %s: %s' \
                 % (self.__hostname, str(e)))
             raise RicciError, 'Error reading data from host %s' % self.__hostname
         except:
             raise RicciError, 'Error reading data from host %s' % self.__hostname
-        luci_log.debug_verbose('Received XML \"%s\" from host %s' \
+        luci_log.debug_verbose('RC:recv1: Received XML \"%s\" from host %s' \
             % (xml_in, self.__hostname))
 
         try:
             if doc == None:
                 doc = minidom.parseString(xml_in)
         except Exception, e:
-            luci_log.debug('Error parsing XML \"%s" from %s' \
+            luci_log.debug_verbose('RC:recv2: Error parsing XML \"%s" from %s' \
                 % (xml_in, str(e)))
             raise RicciError, 'Error parsing XML from host %s: %s' \
                     % (self.__hostname, str(e))
@@ -326,7 +327,7 @@
         
         try:        
             if doc.firstChild.nodeName != 'ricci':
-                luci_log.debug('Expecting \"ricci\" got XML \"%s\" from %s' %
+                luci_log.debug_verbose('RC:recv3: Expecting \"ricci\" got XML \"%s\" from %s' %
                     (xml_in, self.__hostname))
                 raise Exception, 'Expecting first XML child node to be \"ricci\"'
         except Exception, e:
@@ -344,7 +345,7 @@
     try:
         return RicciCommunicator(hostname)
     except Exception, e:
-        luci_log.debug('Error creating a ricci connection to %s: %s' \
+        luci_log.debug_verbose('RC:GRC0: Error creating a ricci connection to %s: %s' \
             % (hostname, str(e)))
         return None
     pass
@@ -394,7 +395,7 @@
 def batch_status(batch_xml):
     if batch_xml.nodeName != 'batch':
         try:
-            luci_log.debug('Expecting an XML batch node. Got \"%s\"' \
+            luci_log.debug_verbose('RC:BS0: Expecting an XML batch node. Got \"%s\"' \
                 % batch_xml.toxml())
         except:
             pass
@@ -414,10 +415,10 @@
                     last = last + 1
                     last = last - 2 * last
     try:
-        luci_log.debug_verbose('Returning (%d, %d) for batch_status(\"%s\")' \
+        luci_log.debug_verbose('RC:BS1: Returning (%d, %d) for batch_status(\"%s\")' \
             % (last, total, batch_xml.toxml()))
     except:
-        luci_log.debug_verbose('Returning last, total')
+        luci_log.debug_verbose('RC:BS2: Returning last, total')
 
     return (last, total)
 
@@ -443,7 +444,7 @@
 # * error_msg:  error message
 def extract_module_status(batch_xml, module_num=1):
     if batch_xml.nodeName != 'batch':
-        luci_log.debug('Expecting \"batch\" got \"%s\"' % batch_xml.toxml())
+        luci_log.debug_verbose('RC:EMS0: Expecting \"batch\" got \"%s\"' % batch_xml.toxml())
         raise RicciError, 'Invalid XML node; expecting a batch node'
 
     c = 0




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-11-09 20:32 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-11-09 20:32 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-11-09 20:32:02

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 
	                           conga_constants.py 

Log message:
	fix the cluster start/stop/restart/delete actions in the actions menu so they do what they're supposed to (as opposed to nothing)

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.101&r2=1.102
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.156&r2=1.157
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&r1=1.23&r2=1.24

--- conga/luci/cluster/form-macros	2006/11/07 21:33:52	1.101
+++ conga/luci/cluster/form-macros	2006/11/09 20:32:02	1.102
@@ -89,9 +89,27 @@
 		<td class="cluster cluster_action">
 			<form method="post" onSubmit="return dropdown(this.gourl)">
 				<select name="gourl" id="cluster_action" class="cluster">
-					<option tal:condition="python: 'running' in cstatus and cstatus['running'] != 'true'" value="" class="cluster running">Start this cluster</option>
-					<option tal:condition="python: 'running' in cstatus and cstatus['running'] == 'true'" value="" class="cluster stopped">Stop this cluster</option>
-					<option value="" class="cluster">Restart this cluster</option>
+					<option class="cluster running"
+						tal:condition="python: 'running' in cstatus and cstatus['running'] != 'true'"
+						tal:attributes="value cstatus/start_url | nothing">
+						Start this cluster
+					</option>
+
+					<option class="cluster"
+						tal:attributes="value cstatus/restart_url | nothing">
+						Restart this cluster
+					</option>
+
+					<option class="cluster stopped"
+						tal:condition="python: 'running' in cstatus and cstatus['running'] == 'true'"
+						tal:attributes="value cstatus/stop_url | nothing">
+						Stop this cluster
+					</option>
+
+					<option class="cluster stopped"
+						tal:attributes="value cstatus/delete_url | nothing">
+						Delete this cluster
+					</option>
 				</select>
 				<input class="cluster" type="submit" value="Go" />
 			</form>
@@ -1068,11 +1086,9 @@
 </div>
 
 <div metal:define-macro="clusterprocess-form">
-	<tal:block tal:define="
-		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
-
 	<tal:block
-		tal:define="res python: here.processClusterProps(ricci_agent, request)" />
+		tal:define="result python: here.clusterTaskProcess(modelb, request)"/>
+	<h2>Cluster Process Form</h2>
 </div>
 
 <div metal:define-macro="fence-option-list">
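
The menu rewrite above only works because each option now carries a real URL in its value attribute; the old markup left value="" for every action, so the Go button's dropdown() handler had nowhere to navigate, which is the bug described in the log message. The *_url values come from the adapters; a hedged sketch of their shape, modeled on the restart_url construction in the getClustersInfo hunk of the first message above (CLUSTER_STOP = '1000' is in conga_constants; every other code shown is an assumed value following the same scheme):

    CLUSTER_STOP, CLUSTER_RESTART = '1000', '1001'   # restart code assumed
    CLUSTER_START, CLUSTER_DELETE = '1002', '1003'   # both assumed
    CLUSTER_PROCESS = '36'                           # pagetype code, assumed

    def cluster_task_urls(baseurl, clustername):
        # illustrative helper: each <option> value is a cluster-process
        # page URL with the task code appended
        def url(task):
            return ('%s?pagetype=%s&clustername=%s&task=%s'
                    % (baseurl, CLUSTER_PROCESS, clustername, task))
        return {'start_url':   url(CLUSTER_START),
                'restart_url': url(CLUSTER_RESTART),
                'stop_url':    url(CLUSTER_STOP),
                'delete_url':  url(CLUSTER_DELETE)}
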
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/09 14:17:08	1.156
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/09 20:32:02	1.157
@@ -112,7 +112,6 @@
 
 def validateCreateCluster(self, request):
 	errors = list()
-	messages = list()
 	requestResults = {}
 
 	if not havePermCreateCluster(self):
@@ -234,7 +233,7 @@
 		buildClusterCreateFlags(self, batch_id_map, clusterName)
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName)
+	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
 
 def buildClusterCreateFlags(self, batch_map, clusterName):
   path = str(CLUSTER_FOLDER_PATH + clusterName)
@@ -379,10 +378,11 @@
 			errors.append('An error occurred while attempting to add cluster node \"' + clunode['host'] + '\"')
 			return (False, {'errors': errors, 'requestResults': cluster_properties})
 
-			messages.append('Cluster join initiated for host \"' + clunode['host'] + '\"')
-
+	messages.append('Cluster join initiated for host \"' + clunode['host'] + '\"')
 	buildClusterCreateFlags(self, batch_id_map, clusterName)
-	return (True, {'errors': errors, 'messages': messages})
+
+	response = request.RESPONSE
+	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
 
 def validateServiceAdd(self, request):
 	try:
@@ -757,23 +757,14 @@
 				luci_log.debug_verbose('VCC0a: no model, no cluster name')
 				return (False, {'errors': ['No cluster model was found.']})
 
-		rc = getRicciAgent(self, cluname)
-		if not rc:
-			luci_log.debug_verbose('VCCb: no model in session, unable to find a ricci agent for the %s cluster' % cluname)
-			return (False, {'errors': ['No cluster model was found.']})
-
 		try:
-			model = getModelBuilder(None, rc, rc.dom0())
-			if not model:
-				raise Exception, 'model is none'
-		except Exception, e:
-			luci_log.debug_verbose('VCCc: unable to get model builder for cluster %s: %s' % (cluname, str(e)))
+			model = getModelForCluster(self, cluname)
+		except:
 			model = None
 
 		if model is None:
 			luci_log.debug_verbose('VCC0: unable to get model from session')
 			return (False, {'errors': ['No cluster model was found.']})
-
 	try:
 		if not 'configtype' in request.form:
 			luci_log.debug_verbose('VCC2: no configtype')
@@ -853,7 +844,7 @@
 		return (retcode, {'errors': errors, 'messages': messages})
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername + '&busyfirst=true')
 
 def validateFenceAdd(self, request):
 	return (True, {})
@@ -1419,7 +1410,7 @@
 
 def getClusterAlias(self, model):
   alias = model.getClusterAlias()
-  if alias == None:
+  if alias is None:
     return model.getClusterName()
   else:
     return alias
@@ -1652,7 +1643,7 @@
 
 			svc = modelb.retrieveServiceByName(item['name'])
 			dom = svc.getAttribute("domain")
-			if dom != None:
+			if dom is not None:
 				itemmap['faildom'] = dom
 			else:
 				itemmap['faildom'] = "No Failover Domain"
@@ -1736,7 +1727,7 @@
 	#first get service by name from model
 	svc = modelb.getService(servicename)
 	resource_list = list()
-	if svc != None:
+	if svc is not None:
 		indent_ctr = 0
 		children = svc.getChildren()
 		for child in children:
@@ -1751,7 +1742,7 @@
 	#Call yourself on every children
 	#then return
 	rc_map = {}
-	if parent != None:
+	if parent is not None:
 		rc_map['parent'] = parent
 	rc_map['name'] = child.getName()
 	if child.isRefObject() == True:
@@ -1968,11 +1959,11 @@
     fdom_map['cfgurl'] = baseurl + "?pagetype=" + FDOM_LIST + "&clustername=" + clustername
     ordered_attr = fdom.getAttribute('ordered')
     restricted_attr = fdom.getAttribute('restricted')
-    if ordered_attr != None and (ordered_attr == "true" or ordered_attr == "1"):
+    if ordered_attr is not None and (ordered_attr == "true" or ordered_attr == "1"):
       fdom_map['ordered'] = True
     else:
       fdom_map['ordered'] = False
-    if restricted_attr != None and (restricted_attr == "true" or restricted_attr == "1"):
+    if restricted_attr is not None and (restricted_attr == "true" or restricted_attr == "1"):
       fdom_map['restricted'] = True
     else:
       fdom_map['restricted'] = False
@@ -1993,7 +1984,7 @@
       else:
         nodesmap['status'] = NODE_INACTIVE
       priority_attr =  node.getAttribute('priority')
-      if priority_attr != None:
+      if priority_attr is not None:
         nodesmap['priority'] = "0"
       nodelist.append(nodesmap)
     fdom_map['nodeslist'] = nodelist
@@ -2006,7 +1997,7 @@
           break  #found more info about service...
 
       domain = svc.getAttribute("domain")
-      if domain != None:
+      if domain is not None:
         if domain == fdom.getName():
           svcmap = {}
           svcmap['name'] = svcname
@@ -2018,47 +2009,52 @@
     fdomlist.append(fdom_map)
   return fdomlist
 
-def processClusterProps(self, ricci_agent, request):
-  #First, retrieve cluster.conf from session
-  conf = request.SESSION.get('conf')
-  model = ModelBuilder(0, None, None, conf)
-
-  #Next, determine actiontype and switch on it
-  actiontype = request[ACTIONTYPE]
-
-  if actiontype == BASECLUSTER:
-    cp = model.getClusterPtr()
-    cfgver = cp.getConfigVersion()
-
-    rcfgver = request['cfgver']
-
-    if cfgver != rcfgver:
-      cint = int(cfgver)
-      rint = int(rcfgver)
-      if rint > cint:
-        cp.setConfigVersion(rcfgver)
-
-    rname = request['cluname']
-    name = model.getClusterAlias()
-
-    if rname != name:
-      cp.addAttribute('alias', rname)
-
-    response = request.RESPONSE
-    response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
-    return
+def clusterTaskProcess(self, model, request):
+	try:
+		task = request['task']
+	except:
+		try:
+			task = request.form['task']
+		except:
+			luci_log.debug_verbose('CTP1: no task specified')
+			task = None
 
-  elif actiontype == FENCEDAEMON:
-    pass
+	if not model:
+		try:
+			cluname = request['clustername']
+			if not cluname:
+				raise Exception, 'cluname is blank'
+		except:
+			try:
+				cluname = request.form['clustername']
+				if not cluname:
+					raise Exception, 'cluname is blank'
+			except:
+				luci_log.debug_verbose('CTP0: no model/no cluster name')
+				return 'Unable to determine the cluster name.'
+		try:
+			model = getModelForCluster(self, cluname)
+		except Exception, e:
+			luci_log.debug_verbose('CTP2: GMFC failed for %s' % cluname)
+			model = None
 
-  elif actiontype == MULTICAST:
-    pass
+	if not model:
+		return 'Unable to get the model object for %s' % cluname
 
-  elif actiontype == QUORUMD:
-    pass
+	if task == CLUSTER_STOP:
+		clusterStop(self, model)
+	elif task == CLUSTER_START:
+		clusterStart(self, model)
+	elif task == CLUSTER_RESTART:
+		clusterRestart(self, model)
+	elif task == CLUSTER_DELETE:
+		clusterStop(self, model, delete=True)
+	else:
+		return 'An unknown cluster task was requested.'
 
-  else:
-    return
+	response = request.RESPONSE
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		 % (request['URL'], CLUSTER, model.getClusterName()))
 
 def getClusterInfo(self, model, req):
   try:
@@ -2091,7 +2087,6 @@
       luci_log.debug_verbose('GCI3: unable to get model for cluster %s: %s' % (cluname, str(e)))
       return {}
 
-  baseurl = req['URL'] + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + cluname + "&"
   prop_baseurl = req['URL'] + '?' + PAGETYPE + '=' + CLUSTER_CONFIG + '&' + CLUNAME + '=' + cluname + '&'
   map = {}
   basecluster_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_GENERAL_TAB
@@ -2110,10 +2105,10 @@
   map['fencedaemon_url'] = fencedaemon_url
   fdp = model.getFenceDaemonPtr()
   pjd = fdp.getAttribute('post_join_delay')
-  if pjd == None:
+  if pjd is None:
     pjd = "6"
   pfd = fdp.getAttribute('post_fail_delay')
-  if pfd == None:
+  if pfd is None:
     pfd = "0"
   #post join delay
   map['pjd'] = pjd
@@ -2152,27 +2147,27 @@
   if is_quorumd:
     qdp = model.getQuorumdPtr()
     interval = qdp.getAttribute('interval')
-    if interval != None:
+    if interval is not None:
       map['interval'] = interval
 
     tko = qdp.getAttribute('tko')
-    if tko != None:
+    if tko is not None:
       map['tko'] = tko
 
     votes = qdp.getAttribute('votes')
-    if votes != None:
+    if votes is not None:
       map['votes'] = votes
 
     min_score = qdp.getAttribute('min_score')
-    if min_score != None:
+    if min_score is not None:
       map['min_score'] = min_score
 
     device = qdp.getAttribute('device')
-    if device != None:
+    if device is not None:
       map['device'] = device
 
     label = qdp.getAttribute('label')
-    if label != None:
+    if label is not None:
       map['label'] = label
 
     heuristic_kids = qdp.getChildren()
@@ -2180,24 +2175,24 @@
     for kid in heuristic_kids:
       hmap = {}
       hname = kid.getAttribute('name')
-      if hname == None:
+      if hname is None:
         hname = h_ctr
         h_ctr = h_ctr + 1
       hprog = kid.getAttribute('program')
       hscore = kid.getAttribute('score')
       hinterval = kid.getAttribute('interval')
-      if hprog == None:
+      if hprog is None:
         continue
-      if hname != None:
+      if hname is not None:
         hmap['hname'] = hname
       else:
         hmap['hname'] = ""
       hmap['hprog'] = hprog
-      if hscore != None:
+      if hscore is not None:
         hmap['hscore'] = hscore
       else:
         hmap['hscore'] = ""
-      if hinterval != None:
+      if hinterval is not None:
         hmap['hinterval'] = hinterval
       else:
         hmap['hinterval'] = ""
@@ -2239,6 +2234,12 @@
   map['votes'] = clu['votes']
   map['minquorum'] = clu['minQuorum']
   map['clucfg'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_CONFIG + "&" + CLUNAME + "=" + clustername
+
+  map['restart_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_RESTART
+  map['stop_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_STOP
+  map['start_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_START
+  map['delete_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_DELETE
+
   svc_dict_list = list()
   for svc in svclist:
       svc_dict = {}
@@ -2270,6 +2271,317 @@
 
   return map
 
+def nodeLeave(self, rc, clustername, nodename_resolved):
+	path = str(CLUSTER_FOLDER_PATH + clustername + '/' + nodename_resolved)
+
+	try:
+		nodefolder = self.restrictedTraverse(path)
+		if not nodefolder:
+			raise Exception, 'cannot find database object at %s' % path
+	except Exception, e:
+		luci_log.debug('NLO: node_leave_cluster err: %s' % str(e))
+		return None
+
+	objname = str(nodename_resolved + "____flag")
+	fnpresent = noNodeFlagsPresent(self, nodefolder, objname, nodename_resolved)
+
+	if fnpresent is None:
+		luci_log.debug('NL1: An error occurred while checking flags for %s' \
+			% nodename_resolved)
+		return None
+
+	if fnpresent == False:
+		luci_log.debug('NL2: flags are still present for %s -- bailing out' \
+			% nodename_resolved)
+		return None
+
+	batch_number, result = nodeLeaveCluster(rc)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('NL3: nodeLeaveCluster error: batch_number and/or result is None')
+		return None
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(), batch_number, NODE_LEAVE_CLUSTER, "Node \'%s\' leaving cluster" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('NL4: failed to set flags: %s' % str(e))
+	return True
+
+def nodeJoin(self, rc, clustername, nodename_resolved):
+	batch_number, result = nodeJoinCluster(rc)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('NJ0: batch_number and/or result is None')
+		return None
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(), batch_number, NODE_JOIN_CLUSTER, "Node \'%s\' joining cluster" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('NJ1: failed to set flags: %s' % str(e))
+	return True
+
+def clusterStart(self, model):
+	if model is None:
+		return None
+
+	clustername = model.getClusterName()
+	nodes = model.getNodes()
+	if not nodes or len(nodes) < 1:
+		return None
+
+	errors = 0
+	for node in nodes:
+		nodename = node.getName().strip()
+		nodename_resolved = resolve_nodename(self, clustername, nodename)
+
+		try:
+			rc = RicciCommunicator(nodename_resolved)
+		except Exception, e:
+			luci_log.debug_verbose('CStart: RC %s: %s' \
+				% (nodename_resolved, str(e)))
+			errors += 1
+			continue
+		if nodeJoin(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('CStart1: nodeJoin %s' % nodename_resolved)
+			errors += 1
+
+	return errors
+
+def clusterStop(self, model, delete=False):
+	if model is None:
+		return None
+
+	clustername = model.getClusterName()
+	nodes = model.getNodes()
+	if not nodes or len(nodes) < 1:
+		return None
+
+	errors = 0
+	for node in nodes:
+		nodename = node.getName().strip()
+		nodename_resolved = resolve_nodename(self, clustername, nodename)
+
+		try:
+			rc = RicciCommunicator(nodename_resolved)
+		except Exception, e:
+			luci_log.debug_verbose('[%d] CStop0: RC %s: %s' \
+				% (delete, nodename_resolved, str(e)))
+			errors += 1
+			continue
+		if nodeLeave(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('[%d] CStop1: nodeLeave %s' \
+				% (delete, nodename_resolved))
+			errors += 1
+	return errors
+
+def clusterRestart(self, model):
+	snum_err = clusterStop(self, model)
+	if snum_err:
+		luci_log.debug_verbose('cluRestart0: clusterStop: %d errs' % snum_err)
+	jnum_err = clusterStart(self, model)
+	if jnum_err:
+		luci_log.debug_verbose('cluRestart1: clusterStart: %d errs' % jnum_err)
+	return snum_err + jnum_err
+
+def clusterDelete(self, model):
+	return clusterStop(self, model, delete=True)
+
+def forceNodeReboot(self, rc, clustername, nodename_resolved):
+	batch_number, result = nodeReboot(rc)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('FNR0: batch_number and/or result is None')
+		return None
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(), batch_number, NODE_REBOOT, "Node \'%s\' is being rebooted" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('FNR1: failed to set flags: %s' % str(e))
+	return True
+
+def forceNodeFence(self, clustername, nodename, nodename_resolved):
+	path = str(CLUSTER_FOLDER_PATH + clustername)
+
+	try:
+		clusterfolder = self.restrictedTraverse(path)
+		if not clusterfolder:
+			raise Exception, 'no cluster folder at %s' % path
+	except Exception, e:
+		luci_log.debug('FNF0: The cluster folder %s could not be found: %s' \
+			 % (clustername, str(e)))
+		return None
+
+	try:
+		nodes = clusterfolder.objectItems('Folder')
+		if not nodes or len(nodes) < 1:
+			raise Exception, 'no cluster nodes'
+	except Exception, e:
+		luci_log.debug('FNF1: No cluster nodes for %s were found: %s' \
+			% (clustername, str(e)))
+		return None
+
+	found_one = False
+	for node in nodes:
+		if node[1].getId().find(nodename) != (-1):
+			continue
+
+		try:
+			rc = RicciCommunicator(node[1].getId())
+			if not rc:
+				raise Exception, 'rc is None'
+		except Exception, e:
+			luci_log.debug('FNF2: ricci error for host %s: %s' \
+				% (node[0], str(e)))
+			continue
+
+		if not rc.authed():
+			rc = None
+			try:
+				snode = getStorageNode(self, node[1].getId())
+				setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+			try:
+				setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+			continue
+		found_one = True
+		break
+
+	if not found_one:
+		return None
+
+	batch_number, result = nodeFence(rc, nodename)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('FNF3: batch_number and/or result is None')
+		return None
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(), batch_number, NODE_FENCE, "Node \'%s\' is being fenced" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('FNF4: failed to set flags: %s' % str(e))
+	return True
+
+def nodeDelete(self, rc, model, clustername, nodename, nodename_resolved):
+	#We need to get a node name other than the node
+	#to be deleted, then delete the node from the cluster.conf
+	#and propagate it. We will need two ricci agents for this task.
+
+	# Make sure we can find a second node before we hose anything.
+	path = str(CLUSTER_FOLDER_PATH + clustername)
+	try:
+		clusterfolder = self.restrictedTraverse(path)
+		if not clusterfolder:
+			raise Exception, 'no cluster folder at %s' % path
+	except Exception, e:
+		luci_log.debug_verbose('ND0: node delete error for cluster %s: %s' \
+				% (clustername, str(e)))
+		return None
+
+	try:
+		nodes = clusterfolder.objectItems('Folder')
+		if not nodes or len(nodes) < 1:
+			raise Exception, 'no cluster nodes in DB'
+	except Exception, e:
+		luci_log.debug_verbose('ND1: node delete error for cluster %s: %s' \
+			% (clustername, str(e)))
+
+	found_one = False
+	for node in nodes:
+		if node[1].getId().find(nodename) != (-1):
+			continue
+		#here we make certain the node is up...
+		# XXX- we should also make certain this host is still
+		# in the cluster we believe it is.
+		try:
+			rc2 = RicciCommunicator(node[1].getId())
+		except Exception, e:
+			luci_log.info('ND2: ricci %s error: %s' % (node[0], str(e)))
+			continue
+
+		if not rc2.authed():
+			try:
+				setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+			try:
+				snode = getStorageNode(self, node[0])
+				setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+			luci_log.debug_verbose('ND3: %s is not authed' % node[0])
+			rc2 = None
+			continue
+		else:
+			found_one = True
+			break
+
+	if not found_one:
+		luci_log.debug_verbose('ND4: unable to find ricci agent to delete %s from %s' % (nodename, clustername))
+		return None
+
+	#First, delete cluster.conf from node to be deleted.
+	#next, have node leave cluster.
+	batch_number, result = nodeLeaveCluster(rc, purge=True)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('ND5: batch_number and/or result is None')
+		return None
+
+	#It is not worth flagging this node in DB, as we are going
+	#to delete it anyway. Now, we need to delete node from model
+	#and send out new cluster.conf
+	delete_target = None
+	nodelist = model.getNodes()
+	find_node = lower(nodename)
+	for n in nodelist:
+		try:
+			if lower(n.getName()) == find_node:
+				delete_target = n
+				break
+		except:
+			continue
+
+	if delete_target is None:
+		luci_log.debug_verbose('ND6: unable to find delete target for %s in %s' \
+			% (nodename, clustername))
+		return None
+
+	model.deleteNode(delete_target)
+
+	try:
+		str_buf = model.exportModelAsString()
+		if not str_buf:
+			raise Exception, 'model string is blank'
+	except Exception, e:
+		luci_log.debug_verbose('ND7: exportModelAsString: %s' % str(e))
+		return None
+
+	# propagate the new cluster.conf via the second node
+	batch_number, result = setClusterConf(rc2, str(str_buf))
+	if batch_number is None:
+		luci_log.debug_verbose('ND8: batch number is None after del node in NTP')
+		return None
+
+	#Now we need to delete the node from the DB
+	path = str(CLUSTER_FOLDER_PATH + clustername)
+	del_path = str(path + '/' + nodename_resolved)
+
+	try:
+		delnode = self.restrictedTraverse(del_path)
+		clusterfolder = self.restrictedTraverse(path)
+		clusterfolder.manage_delObjects(delnode[0])
+	except Exception, e:
+		luci_log.debug_verbose('ND9: error deleting %s: %s' \
+			% (del_path, str(e)))
+
+	try:
+		set_node_flag(self, clustername, rc2.hostname(), batch_number, NODE_DELETE, "Deleting node \'%s\'" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('ND10: failed to set flags: %s' % str(e))
+	return True
+
 def nodeTaskProcess(self, model, request):
 	try:
 		clustername = request['clustername']
@@ -2345,312 +2657,41 @@
 			return None
 
 	if task == NODE_LEAVE_CLUSTER:
-		path = str(CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved)
-
-		try:
-			nodefolder = self.restrictedTraverse(path)
-			if not nodefolder:
-				raise Exception, 'cannot find directory at %s' % path
-		except Exception, e:
-			luci_log.debug('node_leave_cluster err: %s' % str(e))
-			return None
-
-		objname = str(nodename_resolved + "____flag")
-
-		fnpresent = noNodeFlagsPresent(self, nodefolder, objname, nodename_resolved)
-		if fnpresent is None:
-			luci_log.debug('An error occurred while checking flags for %s' \
-				% nodename_resolved)
+		if nodeLeave(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeLeave failed')
 			return None
 
-		if fnpresent == False:
-			luci_log.debug('flags are still present for %s -- bailing out' \
-				% nodename_resolved)
-			return None
-
-		batch_number, result = nodeLeaveCluster(rc)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeLeaveCluster error: batch_number and/or result is None')
-			return None
-
-		batch_id = str(batch_number)
-		objpath = str(path + "/" + objname)
-
-		try:
-			nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_LEAVE_CLUSTER, "string")
-			flag.manage_addProperty(FLAG_DESC, "Node \'" + nodename + "\' leaving cluster", "string")
-		except:
-			luci_log.debug('An error occurred while setting flag %s' % objpath)
-
-		response = request.RESPONSE
 		#Is this correct? Should we re-direct to the cluster page?
+		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
 	elif task == NODE_JOIN_CLUSTER:
-		batch_number, result = nodeJoinCluster(rc)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeJoin error: batch_number and/or result is None')
+		if nodeJoin(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeJoin failed')
 			return None
 
-		path = str(CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved)
-		batch_id = str(batch_number)
-		objname = str(nodename_resolved + "____flag")
-		objpath = str(path + "/" + objname)
-
-		try:
-			nodefolder = self.restrictedTraverse(path)
-			nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_JOIN_CLUSTER, "string")
-			flag.manage_addProperty(FLAG_DESC, "Node \'" + nodename + "\' joining cluster", "string")
-		except Exception, e:
-			luci_log.debug_verbose('nodeJoin error: creating flags at %s: %s' \
-				% (path, str(e)))
-
-		response = request.RESPONSE
 		#Once again, is this correct? Should we re-direct to the cluster page?
+		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
 	elif task == NODE_REBOOT:
-		batch_number, result = nodeReboot(rc)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeReboot: batch_number and/or result is None')
+		if forceNodeReboot(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeReboot failed')
 			return None
 
-		path = str(CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved)
-		batch_id = str(batch_number)
-		objname = str(nodename_resolved + "____flag")
-		objpath = str(path + "/" + objname)
-
-		try:
-			nodefolder = self.restrictedTraverse(path)
-			nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_REBOOT, "string")
-			flag.manage_addProperty(FLAG_DESC, "Node \'" + nodename + "\' is being rebooted", "string")
-		except Exception, e:
-			luci_log.debug_verbose('nodeReboot err: creating flags at %s: %s' \
-				% (path, str(e)))
-
-		response = request.RESPONSE
 		#Once again, is this correct? Should we re-direct to the cluster page?
+		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
 	elif task == NODE_FENCE:
-		#here, we DON'T want to open connection to node to be fenced.
-		path = str(CLUSTER_FOLDER_PATH + clustername)
-		try:
-			clusterfolder = self.restrictedTraverse(path)
-			if not clusterfolder:
-				raise Exception, 'no cluster folder at %s' % path
-		except Exception, e:
-			luci_log.debug('The cluster folder for %s could not be found: %s' \
-				 % (clustername, str(e)))
+		if forceNodeFence(self, clustername, nodename, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeFence failed')
 			return None
 
-		try:
-			nodes = clusterfolder.objectItems('Folder')
-			if not nodes or len(nodes) < 1:
-				raise Exception, 'no cluster nodes'
-		except Exception, e:
-			luci_log.debug('No cluster nodes for %s were found: %s' \
-				% (clustername, str(e)))
-			return None
-
-		found_one = False
-		for node in nodes:
-			if node[1].getId().find(nodename) != (-1):
-				continue
-
-			try:
-				rc = RicciCommunicator(node[1].getId())
-				if not rc:
-					raise Exception, 'rc is None'
-			except Exception, e:
-				luci_log.debug('ricci error for host %s: %s' \
-					% (node[0], str(e)))
-				continue
-
-			if not rc.authed():
-				rc = None
-				try:
-					snode = getStorageNode(self, node[1].getId())
-					setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
-				except:
-					pass
-
-				try:
-					setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
-				except:
-					pass
-
-				continue
-			found_one = True
-			break
-
-		if not found_one:
-			return None
-
-		batch_number, result = nodeFence(rc, nodename)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeFence: batch_number and/or result is None')
-			return None
-
-		path = str(path + "/" + nodename_resolved)
-		batch_id = str(batch_number)
-		objname = str(nodename_resolved + "____flag")
-		objpath = str(path + "/" + objname)
-
-		try:
-			nodefolder = self.restrictedTraverse(path)
-			nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_FENCE, "string")
-			flag.manage_addProperty(FLAG_DESC, "Node \'" + nodename + "\' is being fenced", "string")
-		except Exception, e:
-			luci_log.debug_verbose('nodeFence err: creating flags at %s: %s' \
-				% (path, str(e)))
-
-		response = request.RESPONSE
 		#Once again, is this correct? Should we re-direct to the cluster page?
+		response = request.RESPONSE
 		response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
 	elif task == NODE_DELETE:
-		#We need to get a node name other than the node
-		#to be deleted, then delete the node from the cluster.conf
-		#and propogate it. We will need two ricci agents for this task.
-
-		# Make sure we can find a second node before we hose anything.
-		path = str(CLUSTER_FOLDER_PATH + clustername)
-		try:
-			clusterfolder = self.restrictedTraverse(path)
-			if not clusterfolder:
-				raise Exception, 'no cluster folder at %s' % path
-		except Exception, e:
-			luci_log.debug_verbose('node delete error for cluster %s: %s' \
-				% (clustername, str(e)))
-			return None
-
-		try:
-			nodes = clusterfolder.objectItems('Folder')
-			if not nodes or len(nodes) < 1:
-				raise Exception, 'no cluster nodes in DB'
-		except Exception, e:
-			luci_log.debug_verbose('node delete error for cluster %s: %s' \
-				% (clustername, str(e)))
-
-		found_one = False
-		for node in nodes:
-			if node[1].getId().find(nodename) != (-1):
-				continue
-			#here we make certain the node is up...
-			# XXX- we should also make certain this host is still
-			# in the cluster we believe it is.
-			try:
-				rc2 = RicciCommunicator(node[1].getId())
-			except Exception, e:
-				luci_log.info('ricci %s error: %s' % (node[0], str(e)))
-				continue
-			except:
-				continue
-
-			if not rc2.authed():
-				try:
-					setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
-				except:
-					pass
-
-				try:
-					snode = getStorageNode(self, node[0])
-					setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
-				except:
-					pass
-
-				luci_log.debug_verbose('%s is not authed' % node[0])
-				rc2 = None
-				continue
-			else:
-				found_one = True
-				break
-
-		if not found_one:
-			luci_log.debug_verbose('unable to find ricci node to delete %s from %s' % (nodename, clustername))
+		if nodeDelete(self, rc, model, clustername, nodename, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeDelete failed')
 			return None
-
-		#First, delete cluster.conf from node to be deleted.
-		#next, have node leave cluster.
-		batch_number, result = nodeLeaveCluster(rc, purge=True)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeDelete: batch_number and/or result is None')
-			return None
-
-		#It is not worth flagging this node in DB, as we are going
-		#to delete it anyway. Now, we need to delete node from model
-		#and send out new cluster.conf
-		delete_target = None
-		nodelist = model.getNodes()
-		find_node = lower(nodename)
-		for n in nodelist:
-			try:
-				if lower(n.getName()) == find_node:
-					delete_target = n
-					break
-			except:
-				continue
-
-		if delete_target is None:
-			luci_log.debug_verbose('unable to find delete target for %s in %s' \
-				% (nodename, clustername))
-			return None
-
-		model.deleteNode(delete_target)
-
-		try:
-			str_buf = model.exportModelAsString()
-			if not str_buf:
-				raise Exception, 'model string is blank'
-		except Exception, e:
-			luci_log.debug_verbose('NTP exportModelAsString: %s' % str(e))
-			return None
-
-		# propagate the new cluster.conf via the second node
-		batch_number, result = setClusterConf(rc2, str(str_buf))
-		if batch_number is None:
-			luci_log.debug_verbose('batch number is None after del node in NTP')
-			return None
-
-		#Now we need to delete the node from the DB
-		path = str(CLUSTER_FOLDER_PATH + clustername)
-		del_path = str(path + "/" + nodename_resolved)
-
-		try:
-			delnode = self.restrictedTraverse(del_path)
-			clusterfolder = self.restrictedTraverse(path)
-			clusterfolder.manage_delObjects(delnode[0])
-		except Exception, e:
-			luci_log.debug_verbose('error deleting %s: %s' % (del_path, str(e)))
-
-		batch_id = str(batch_number)
-		objname = str(nodename_resolved + "____flag")
-		objpath = str(path + "/" + objname)
-
-		try:
-			clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_DELETE, "string")
-			flag.manage_addProperty(FLAG_DESC, "Deleting node \'" + nodename + "\'", "string")
-		except Exception, e:
-			luci_log.debug_verbose('nodeDelete %s err setting flag at %s: %s' \
-				% (nodename, objpath, str(e)))
-
 		response = request.RESPONSE
 		response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
 
@@ -2951,7 +2992,8 @@
       except:
         fd = None #Set to None in case last time thru loop
         continue
-      if fd != None:
+
+      if fd is not None:
         if fd.isShared() == False:  #Not a shared dev...build struct and add
           fencedev = {}
           fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
@@ -2974,7 +3016,7 @@
           last_kid_fd = None
           level1.append(fencedev)
         else:  #This dev is shared
-          if (last_kid_fd != None) and (fd.getName().strip() == last_kid_fd.getName().strip()):  #just append a new instance struct to last_kid_fd
+          if (last_kid_fd is not None) and (fd.getName().strip() == last_kid_fd.getName().strip()):  #just append a new instance struct to last_kid_fd
             instance_struct = {}
             instance_struct['id'] = str(minor_num)
             minor_num = minor_num + 1
@@ -3045,7 +3087,7 @@
       except:
         fd = None #Set to None in case last time thru loop
         continue
-      if fd != None:
+      if fd is not None:
         if fd.isShared() == False:  #Not a shared dev...build struct and add
           fencedev = {}
           fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
@@ -3068,7 +3110,7 @@
           last_kid_fd = None
           level2.append(fencedev)
         else:  #This dev is shared
-          if (last_kid_fd != None) and (fd.getName().strip() == last_kid_fd.getName().strip()):  #just append a new instance struct to last_kid_fd
+          if (last_kid_fd is not None) and (fd.getName().strip() == last_kid_fd.getName().strip()):  #just append a new instance struct to last_kid_fd
             instance_struct = {}
             instance_struct['id'] = str(minor_num)
             minor_num = minor_num + 1
@@ -3584,7 +3626,7 @@
 
 def getResourceInfo(modelb, request):
 	if not modelb:
-		luci_log.debug_verbose('no modelb obj in getResourceInfo')
+		luci_log.debug_verbose('GRI0: no modelb object in session')
 		return {}
 
 	name = None
@@ -4539,6 +4581,24 @@
 		modelb.setIsVirtualized(isVirtualized)
 	return modelb
 
+def getModelForCluster(self, clustername):
+	rc = getRicciAgent(self, clustername)
+	if not rc:
+		luci_log.debug_verbose('GMFC0: unable to find a ricci agent for %s' \
+			% clustername)
+		return None
+
+	try:
+		model = getModelBuilder(None, rc, rc.dom0())
+		if not model:
+			raise Exception, 'model is none'
+	except Exception, e:
+		luci_log.debug_verbose('GMFC1: unable to get model builder for %s: %s' \
+			 % (clustername, str(e)))
+		return None
+
+	return model
+
 def set_node_flag(self, cluname, agent, batchid, task, desc):
 	path = str(CLUSTER_FOLDER_PATH + cluname)
 	batch_id = str(batchid)
@@ -4551,7 +4611,7 @@
 		flag = self.restrictedTraverse(objpath)
 		flag.manage_addProperty(BATCH_ID, batch_id, 'string')
 		flag.manage_addProperty(TASKTYPE, task, 'string')
-		flag.manage_addProperty(FLAG_DESC, desc)
+		flag.manage_addProperty(FLAG_DESC, desc, 'string')
 	except Exception, e:
 		errmsg = 'Error creating flag (%s,%s,%s)@%s: %s' \
 					% (batch_id, task, desc, objpath, str(e))
--- conga/luci/site/luci/Extensions/conga_constants.py	2006/11/06 23:55:23	1.23
+++ conga/luci/site/luci/Extensions/conga_constants.py	2006/11/09 20:32:02	1.24
@@ -43,6 +43,12 @@
 FENCEDEV_CONFIG="53"
 FENCEDEV="54"
 
+#Cluster tasks
+CLUSTER_STOP = '1000'
+CLUSTER_START = '1001'
+CLUSTER_RESTART = '1002'
+CLUSTER_DELETE = '1003'
+
 #General tasks
 NODE_LEAVE_CLUSTER="100"
 NODE_JOIN_CLUSTER="101"
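
A note on the refactoring above: nodeLeave, nodeJoin, forceNodeReboot, forceNodeFence and nodeDelete all record their pending batch through set_node_flag(), which stores a '____flag' object under the node's folder in the Zope database. A toy sketch of that flag convention, with a plain dict standing in for the Zope object tree (the path layout, the BATCH_ID/TASKTYPE/FLAG_DESC string values, and the host/cluster names below are illustrative; only the overall shape follows the patch):

    CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'

    # Placeholder property names; the real values live in conga_constants.py
    # and are not shown in this patch.
    BATCH_ID = 'batch_id'
    TASKTYPE = 'task'
    FLAG_DESC = 'flag_desc'

    flag_db = {}  # stands in for the Zope object tree

    def set_node_flag(cluname, agent, batchid, task, desc):
        # One flag object per node, annotated with the batch it tracks.
        objname = agent + '____flag'
        objpath = CLUSTER_FOLDER_PATH + cluname + '/' + agent + '/' + objname
        flag_db[objpath] = {
            BATCH_ID: str(batchid),
            TASKTYPE: task,
            FLAG_DESC: desc,
        }

    # Example: flag a pending leave-cluster batch for one (made-up) node.
    NODE_LEAVE_CLUSTER = '100'
    set_node_flag('mycluster', 'node1.example.com', 42,
                  NODE_LEAVE_CLUSTER, "Node 'node1.example.com' leaving cluster")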



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-11-03 22:48 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-11-03 22:48 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-11-03 22:48:15

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 
	                           conga_constants.py 

Log message:
	fix a couple of fence bugs and clean up the configuration validation in preparation for more fixes
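
In short, each per-tab validator is given the common (model, form) signature and is reached through the configFormValidators table. A condensed sketch of that dispatch pattern (the table keys here are invented; the patch does not list the real configtype values):

    def validate_general_config(model, form):
        errors = []
        # ... per-field checks append messages to errors ...
        if errors:
            return (False, {'errors': errors})
        return (True, {'messages': ['Changes accepted. - FILL ME IN']})

    # Keys invented for illustration; the real ones are whatever values
    # the cluster properties form posts in its 'configtype' field.
    configFormValidators = {
        'general': validate_general_config,
    }

    def validate_cluster_config(model, form):
        if 'configtype' not in form:
            return (False, {'errors': ['No configuration type was submitted.']})
        if form['configtype'] not in configFormValidators:
            return (False, {'errors': ['An invalid configuration type was submitted.']})
        config_validator = configFormValidators[form['configtype']]
        return config_validator(model, form)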

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.97&r2=1.98
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.143&r2=1.144
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&r1=1.21&r2=1.22

--- conga/luci/cluster/form-macros	2006/11/03 21:47:26	1.97
+++ conga/luci/cluster/form-macros	2006/11/03 22:48:14	1.98
@@ -394,6 +394,8 @@
 		</script>
 
 		<form name="basecluster" action="" method="post">
+			<input type="hidden" name="cluster_version"
+				tal:attributes="value os_version | nothing" />
 			<input type="hidden" name="pagetype"
 				tal:attributes="value request/pagetype | request/form/pagetype"
 			/>
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/03 21:13:25	1.143
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/03 22:48:15	1.144
@@ -21,6 +21,7 @@
 from Samba import Samba
 from FenceHandler import FenceHandler
 from clusterOS import resolveOSType
+from FenceHandler import FENCE_OPTS
 from GeneralError import GeneralError
 from UnknownClusterError import UnknownClusterError
 from homebase_adapters import nodeUnauth, nodeAuth, manageCluster, createClusterSystems, havePermCreateCluster, setNodeFlag, delNodeFlag, userAuthenticated, getStorageNode, getClusterNode
@@ -34,8 +35,6 @@
 #then only display chooser if the current user has
 #permissions on at least one. If the user is admin, show ALL clusters
 
-CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
-
 try:
 	luci_log = LuciSyslog()
 except LuciSyslogError, e:
@@ -481,17 +480,17 @@
 	
 ## Cluster properties form validation routines
 
-def validateMCastConfig(self, form):
+def validateMCastConfig(model, form):
 	try:
 		mcast_val = form['mcast'].strip().lower()
 		if mcast_val != 'true' and mcast_val != 'false':
-			raise KeyError(mcast_val)
+			raise KeyError, mcast_val
 		if mcast_val == 'true':
 			mcast_val = 1
 		else:
 			mcast_val = 0
 	except KeyError, e:
-		return (False, {'errors': ['An invalid multicast selection was made.']})
+		return (False, {'errors': ['An invalid multicast selection was made']})
 
 	if not mcast_val:
 		return (True, {'messages': ['Changes accepted. - FILL ME IN']})
@@ -504,12 +503,12 @@
 	except socket.error, e:
 		try:
 			socket.inet_pton(socket.AF_INET6, addr_str)
-		except socket.error, e6:
-			return (False, {'errors': ['An invalid multicast address was given: ' + e]})
+		except socket.error, e:
+			return (False, {'errors': ['An invalid multicast address was given: %s' % str(e)]})
 
 	return (True, {'messages': ['Changes accepted. - FILL ME IN']})
 
-def validateQDiskConfig(self, form):
+def validateQDiskConfig(model, form):
 	errors = list()
 
 	try:
@@ -521,7 +520,7 @@
 		else:
 			qdisk_val = 0
 	except KeyError, e:
-		return (False, {'errors': ['An invalid quorum partition selection was made.']})
+		return (False, {'errors': ['An invalid quorum partition selection was made']})
 
 	if not qdisk_val:
 		return (True, {'messages': ['Changes accepted. - FILL ME IN']})
@@ -529,64 +528,64 @@
 	try:
 		interval = int(form['interval'])
 		if interval < 0:
-			raise ValueError('Interval must be 0 or greater.')
+			raise ValueError, 'Interval must be 0 or greater'
 	except KeyError, e:
-		errors.append('No Interval value was given.')
+		errors.append('No Interval value was given')
 	except ValueError, e:
-		errors.append('An invalid Interval value was given: ' + e)
+		errors.append('An invalid Interval value was given: %s' % str(e))
 
 	try:
 		votes = int(form['votes'])
 		if votes < 1:
-			raise ValueError('Votes must be greater than 0')
+			raise ValueError, 'Votes must be greater than 0'
 	except KeyError, e:
-		errors.append('No Votes value was given.')
+		errors.append('No Votes value was given')
 	except ValueError, e:
-		errors.append('An invalid Votes value was given: ' + e)
+		errors.append('An invalid Votes value was given: %s' % str(e))
 
 	try:
 		tko = int(form['tko'])
 		if tko < 0:
-			raise ValueError('TKO must be 0 or greater')
+			raise ValueError, 'TKO must be 0 or greater'
 	except KeyError, e:
-		errors.append('No TKO value was given.')
+		errors.append('No TKO value was given')
 	except ValueError, e:
-		errors.append('An invalid TKO value was given: ' + e)
+		errors.append('An invalid TKO value was given: %s' % str(e))
 
 	try:
 		min_score = int(form['min_score'])
 		if min_score < 1:
 			raise ValueError('Minimum Score must be greater than 0')
 	except KeyError, e:
-		errors.append('No Minimum Score value was given.')
+		errors.append('No Minimum Score value was given')
 	except ValueError, e:
-		errors.append('An invalid Minimum Score value was given: ' + e)
+		errors.append('An invalid Minimum Score value was given: %s' % str(e))
 
 	try:
 		device = form['device'].strip()
 		if not device:
-			raise KeyError('device')
+			raise KeyError, 'device is none'
 	except KeyError, e:
-		errors.append('No Device value was given.')
+		errors.append('No Device value was given')
 
 	try:
 		label = form['label'].strip()
 		if not label:
-			raise KeyError('label')
+			raise KeyError, 'label is none'
 	except KeyError, e:
-		errors.append('No Label value was given.')
+		errors.append('No Label value was given')
 
 	num_heuristics = 0
 	try:
 		num_heuristics = int(form['num_heuristics'])
 		if num_heuristics < 0:
-			raise ValueError(form['num_heuristics'])
+			raise ValueError, 'invalid number of heuristics: %s' % form['num_heuristics']
 		if num_heuristics == 0:
 			num_heuristics = 1
 	except KeyError, e:
 		errors.append('No number of heuristics was given.')
 	except ValueError, e:
-		errors.append('An invalid number of heuristics was given: ' + e)
+		errors.append('An invalid number of heuristics was given: %s' % str(e))
 
 	heuristics = list()
 	for i in xrange(num_heuristics):
@@ -601,37 +600,37 @@
 				(not prefix + 'hscore' in form or not form[prefix + 'hscore'].strip())):
 				# The row is blank; ignore it.
 				continue
-			errors.append('No heuristic name was given for heuristic #' + str(i + 1))
+			errors.append('No heuristic name was given for heuristic #%d' % (i + 1))
 
 		try:
 			hpath = form[prefix + 'hpath']
 		except KeyError, e:
-			errors.append('No heuristic path was given for heuristic #' + str(i + 1))
+			errors.append('No heuristic path was given for heuristic #%d' % (i + 1))
 
 		try:
 			hint = int(form[prefix + 'hint'])
 			if hint < 1:
-				raise ValueError('Heuristic interval values must be greater than 0.')
+				raise ValueError, 'Heuristic interval values must be greater than 0'
 		except KeyError, e:
-			errors.append('No heuristic interval was given for heuristic #' + str(i + 1))
+			errors.append('No heuristic interval was given for heuristic #%d' % i + 1)
 		except ValueError, e:
-			errors.append('An invalid heuristic interval was given for heuristic #' + str(i + 1) + ': ' + e)
+			errors.append('An invalid heuristic interval was given for heuristic #%d: %s' % (i + 1, str(e)))
 
 		try:
 			hscore = int(form[prefix + 'score'])
 			if hscore < 1:
-				raise ValueError('Heuristic scores must be greater than 0.')
+				raise ValueError, 'Heuristic scores must be greater than 0'
 		except KeyError, e:
-			errors.append('No heuristic score was given for heuristic #' + str(i + 1))
+			errors.append('No heuristic score was given for heuristic #%d' % (i + 1))
 		except ValueError, e:
-			errors.append('An invalid heuristic score was given for heuristic #' + str(i + 1) + ': ' + e)
+			errors.append('An invalid heuristic score was given for heuristic #%d: %s' % (i + 1, str(e)))
 		heuristics.append([ hname, hpath, hint, hscore ])
 
 	if len(errors) > 0:
 		return (False, {'errors': errors })
 	return (True, {'messages': ['Changes accepted. - FILL ME IN']})
 
-def validateGeneralConfig(self, form):
+def validateGeneralConfig(model, form):
 	errors = list()
 
 	try:
@@ -655,7 +654,7 @@
 
 	return (True, {'messages': ['Changes accepted. - FILL ME IN']})
 
-def validateFenceConfig(self, form):
+def validateFenceConfig(model, form):
 	errors = list()
 
 	try:
@@ -692,19 +691,33 @@
 	errors = list()
 	messages = list()
 
+	try:
+		model = request.SESSION.get('model')
+		if not model:
+			raise Exception, 'model is none'
+	except Exception, e:
+		luci_log.debug_verbose('VCC0: unable to get model from session')
+		return (False, {'errors': ['No cluster model was found.']})
+
 	if not 'form' in request:
+		luci_log.debug_verbose('VCC1: no form passed in')
 		return (False, {'errors': ['No form was submitted.']})
+
 	if not 'configtype' in request.form:
+		luci_log.debug_verbose('VCC2: no configtype')
 		return (False, {'errors': ['No configuration type was submitted.']})
+
 	if not request.form['configtype'] in configFormValidators:
+		luci_log.debug_verbose('VCC3: invalid config type: %s' % request.form['configtype'])
 		return (False, {'errors': ['An invalid configuration type was submitted.']})
 
-	val = configFormValidators[request.form['configtype']]
-	ret = val(self, request.form)
+	config_validator = configFormValidators[request.form['configtype']]
+	ret = config_validator(model, request.form)
 
 	retcode = ret[0]
 	if 'errors' in ret[1]:
 		errors.extend(ret[1]['errors'])
+
 	if 'messages' in ret[1]:
 		messages.extend(ret[1]['messages'])
 
@@ -2673,7 +2686,7 @@
     if fencedev.getName().strip() == fencename:
       map = fencedev.getAttributes()
       try:
-        map['pretty_name'] = FenceHandler.FENCE_OPTS[fencedev.getAgentType()]
+        map['pretty_name'] = FENCE_OPTS[fencedev.getAgentType()]
       except Exception, e:
         map['pretty_name'] = fencedev.getAgentType()
 
@@ -2708,7 +2721,7 @@
         for kee in kees:
           fencedev[kee] = attr_hash[kee] #copy attrs over
         try:
-          fencedev['pretty_name'] = FenceHandler.FENCE_OPTS[fd.getAgentType()]
+          fencedev['pretty_name'] = FENCE_OPTS[fd.getAgentType()]
         except Exception, e:
           fencedev['pretty_name'] = fd.getAgentType()
 
--- conga/luci/site/luci/Extensions/conga_constants.py	2006/10/20 20:00:29	1.21
+++ conga/luci/site/luci/Extensions/conga_constants.py	2006/11/03 22:48:15	1.22
@@ -66,6 +66,9 @@
 PATH_TO_PRIVKEY="/var/lib/luci/var/certs/privkey.pem"
 PATH_TO_CACERT="/var/lib/luci/var/certs/cacert.pem"
 
+# Zope DB paths
+CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
+
 #Node states
 NODE_ACTIVE="0"
 NODE_INACTIVE="1"
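
One further note on validateQDiskConfig above: it pulls numbered heuristic rows out of the submitted form and validates each field in turn. A self-contained sketch of that per-row loop (the 'heuristic%d:' prefix format is an assumption; the diff never shows how the real prefix is built):

    def parse_heuristics(form, num_heuristics):
        errors = []
        heuristics = []
        for i in range(num_heuristics):
            # ASSUMPTION: the real prefix format is not shown in the diff.
            prefix = 'heuristic%d:' % i
            hname = form.get(prefix + 'hname', '').strip()
            if not hname:
                errors.append('No heuristic name was given for heuristic #%d' % (i + 1))
                continue
            hpath = form.get(prefix + 'hpath', '').strip()
            if not hpath:
                errors.append('No heuristic path was given for heuristic #%d' % (i + 1))
                continue
            try:
                hint = int(form.get(prefix + 'hint', ''))
                if hint < 1:
                    raise ValueError('Heuristic interval values must be greater than 0')
            except ValueError as e:
                errors.append('An invalid heuristic interval was given for heuristic #%d: %s' % (i + 1, str(e)))
                continue
            try:
                hscore = int(form.get(prefix + 'hscore', ''))
                if hscore < 1:
                    raise ValueError('Heuristic scores must be greater than 0')
            except ValueError as e:
                errors.append('An invalid heuristic score was given for heuristic #%d: %s' % (i + 1, str(e)))
                continue
            heuristics.append([hname, hpath, hint, hscore])
        return (heuristics, errors)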



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-10-25  1:53 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-10-25  1:53 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2006-10-25 01:53:34

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py ricci_bridge.py 
Added files:
	luci/logs      : Makefile index_html 

Log message:
	more fixes for bz# 211370
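
The interesting piece is in ricci_bridge.py below: instead of handing the raw XML payload back to the page, getNodeLogs() now renders each logentry element into a timestamped HTML line. A standalone sketch of that rendering loop (the sample XML string at the end is made up; real input comes from a ricci batch run):

    from time import time, ctime
    from xml.dom import minidom

    def format_log_entries(xml_str):
        doc = minidom.parseString(xml_str)
        time_now = time()
        entry = ''
        for i in doc.getElementsByTagName('logentry'):
            log_msg = i.getAttribute('msg')
            if not log_msg:
                continue
            try:
                log_age = int(i.getAttribute('age'))
            except ValueError:
                log_age = 0
            log_domain = i.getAttribute('domain')
            log_pid = i.getAttribute('pid')
            if log_age:
                # ricci reports the entry's age in seconds; convert it
                # back to an absolute timestamp for display.
                entry += ctime(time_now - log_age) + ' '
            if log_domain:
                entry += log_domain
            if log_pid:
                entry += '[' + log_pid + ']'
            entry += ': ' + log_msg + '<br/>'
        return entry

    print(format_log_entries('<log><logentry age="60" domain="ccsd" '
                             'pid="1234" msg="example log message"/></log>'))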

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.90&r2=1.90.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/logs/Makefile.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/logs/index_html.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.120.2.5&r2=1.120.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.30.2.2&r2=1.30.2.3

--- conga/luci/cluster/form-macros	2006/10/16 20:34:37	1.90
+++ conga/luci/cluster/form-macros	2006/10/25 01:53:33	1.90.2.1
@@ -1716,7 +1716,7 @@
 <div metal:define-macro="nodelogs-form">
 	<h2>Recent Log Activity for <span tal:replace="request/nodename"/></h2>
 	<hr/>
-	<span tal:replace="python: here.getLogsForNode(request)"/>
+	<span tal:replace="structure python: here.getLogsForNode(request)"/>
 </div>
 
 <div metal:define-macro="nodeadd-form">
/cvs/cluster/conga/luci/logs/Makefile,v  -->  standard output
revision 1.1.2.1
--- conga/luci/logs/Makefile
+++ -	2006-10-25 01:53:34.925377000 +0000
@@ -0,0 +1,19 @@
+LUCI_HOST=luci
+LUCI_USER=admin
+LUCI_PASS=changeme
+LUCI_FTP=$(LUCI_HOST):8021
+LUCI_HTTP=http://$(LUCI_HOST):8080/luci
+
+all:
+	@true
+
+# import page local page templates to the Luci server
+import:
+	@if test "$(FILES)"; then \
+		../load_site.py -u $(LUCI_USER):$(LUCI_PASS) $(LUCI_HTTP)/logs/ $(FILES) ; \
+	else \
+		find . -follow -maxdepth 1 -type f -not -name "Makefile*" -not -name ".*" -print0 | xargs -0 ../load_site.py -u $(LUCI_USER):$(LUCI_PASS) $(LUCI_HTTP)/logs/ ; \
+	fi
+
+export:
+	@wget -q -r -nH --cut-dirs=2 "ftp://$(LUCI_USER):$(LUCI_PASS)@$(LUCI_FTP)/luci/logs/*"
/cvs/cluster/conga/luci/logs/index_html,v  -->  standard output
revision 1.1.2.1
--- conga/luci/logs/index_html
+++ -	2006-10-25 01:53:35.014028000 +0000
@@ -0,0 +1,84 @@
+<metal:page define-macro="master"><metal:doctype define-slot="doctype"><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"></metal:doctype>
+<metal:block define-slot="top_slot" />
+<metal:block use-macro="here/global_defines/macros/defines" />
+
+<html xmlns="http://www.w3.org/1999/xhtml"
+      xml:lang="en"
+      lang="en"
+      tal:attributes="lang language;
+                      xml:lang language">
+
+  <head metal:use-macro="here/header/macros/html_header">
+
+    <metal:fillbase fill-slot="base">
+      <metal:baseslot define-slot="base">
+        <base href="" tal:attributes="href here/renderBase" />
+      </metal:baseslot>
+    </metal:fillbase>
+
+    <metal:headslot fill-slot="head_slot"
+                    tal:define="lang language;
+                                charset site_properties/default_charset|string:utf-8">
+
+      <metal:cache use-macro="here/global_cache_settings/macros/cacheheaders">
+        Get the global cache headers located in global_cache_settings.
+      </metal:cache>
+
+      <metal:headslot define-slot="head_slot" />
+      <tal:comment replace="nothing"> A slot where you can insert elements in the header from a template </tal:comment>
+    </metal:headslot>
+    
+    <metal:styleslot fill-slot="style_slot">
+      <tal:comment replace="nothing"> A slot where you can insert CSS in the header from a template </tal:comment>
+      <metal:styleslot define-slot="style_slot" />
+    </metal:styleslot>
+
+    <metal:cssslot fill-slot="css_slot">
+      <tal:comment replace="nothing"> This is deprecated, please use style_slot instead. </tal:comment>
+      <metal:cssslot define-slot="css_slot" />
+    </metal:cssslot>
+
+    <metal:javascriptslot fill-slot="javascript_head_slot">
+      <tal:comment replace="nothing"> A slot where you can insert javascript in the header from a template </tal:comment>
+      <metal:javascriptslot define-slot="javascript_head_slot" />
+    </metal:javascriptslot>
+  </head>
+
+  <script type="text/javascript">
+	function delWaitBox() {
+		var waitbox = document.getElementById('waitbox');
+		if (!waitbox)
+			return (-1);
+		waitbox.parentNode.removeChild(waitbox);
+		return (0);
+	}
+  </script>
+
+  <body onLoad="javascript:delWaitBox()"
+		tal:attributes="class here/getSectionFromURL;
+                        dir python:test(isRTL, 'rtl', 'ltr')">
+    <div id="visual-portal-wrapper">
+
+      <div id="portal-top" i18n:domain="plone">
+
+        <div id="portal-header">
+             <a metal:use-macro="here/global_logo/macros/portal_logo">
+               The portal logo, linked to the portal root
+             </a>
+      </div></div></div>
+
+      <div class="visualClear"><!-- --></div>
+
+	  <div id="waitbox">
+		<span>
+			Log information for <span tal:replace="request/nodename | string: host"/> is being retrieved...
+		</span>
+	    <img src="/luci/storage/100wait.gif">
+	  </div>
+
+      <div id="log_data">
+          <span tal:replace="structure python: here.getLogsForNode(request)" />
+      </div>
+</body>
+</html>
+</metal:page>
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/10/24 16:36:23	1.120.2.5
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/10/25 01:53:34	1.120.2.6
@@ -2398,7 +2398,7 @@
     states = getDaemonStates(rc, dlist)
     infohash['d_states'] = states
 
-  infohash['logurl'] = baseurl + "?pagetype=" + NODE_LOGS + "&nodename=" + nodename + "&clustername=" + clustername
+  infohash['logurl'] = '/luci/logs/?nodename=' + nodename + '&clustername=' + clustername
   return infohash
   #get list of faildoms for node
 
@@ -2434,7 +2434,7 @@
       map['status'] = NODE_INACTIVE
       map['status_str'] = NODE_INACTIVE_STR
 
-    map['logurl'] = baseurl + "?pagetype=" + NODE_LOGS + "&nodename=" + name + "&clustername=" + clustername
+    map['logurl'] = '/luci/logs/?nodename=' + name + '&clustername=' + clustername
     #set up URLs for dropdown menu...
     if map['status'] == NODE_ACTIVE:
       map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_LEAVE_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
@@ -2594,27 +2594,43 @@
 		try:
 			nodename = request.form['nodename']
 		except:
-			return "Unable to resolve node name %s to retrieve logging information" % nodename
+			luci_log.debug_verbose('Unable to get node name to retrieve logging information')
+			return 'Unable to get node name to retrieve logging information'
 
+	clustername = None
 	try:
 		clustername = request['clustername']
 	except KeyError, e:
 		try:
 			clustername = request.form['clusterName']
+			if not clustername:
+				raise
 		except:
-			return "Unable to resolve node name %s to retrieve logging information" % nodename
-
-	try:
-		nodename_resolved = resolve_nodename(self, clustername, nodename)
+			clustername = None
+			luci_log.debug_verbose('Unable to find cluster name while retrieving logging information for %s' % nodename)
 	except:
-		return "Unable to resolve node name %s to retrieve logging information" % nodename
+		pass
+
+	if clustername is None:
+		nodename_resolved = nodename
+	else:
+		try:
+			nodename_resolved = resolve_nodename(self, clustername, nodename)
+		except:
+			luci_log.debug_verbose('Unable to resolve node name %s/%s to retrieve logging information' \
+				% (nodename, clustername))
+			return 'Unable to resolve node name for %s in cluster %s' % (nodename, clustername)
 
 	try:
 		rc = RicciCommunicator(nodename_resolved)
-		if not rc:
-			raise
-	except:
-		return "Unable to resolve node name %s to retrieve logging information" % nodename_resolved
+	except RicciError, e:
+		luci_log.debug_verbose('Ricci error while getting logs for %s: %s' \
+			% (nodename_resolved, str(e)))
+		return 'Ricci error while getting logs for %s' % nodename_resolved
+	except:
+		luci_log.debug_verbose('Unexpected exception while getting logs for %s' \
+			% nodename_resolved)
+		return 'Ricci error while getting logs for %s' % nodename_resolved
 
 	if not rc.authed():
 		try:
@@ -2622,7 +2638,15 @@
 			setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
 		except:
 			pass
-		return "Luci is not authenticated to node %s. Please reauthenticate first." % nodename
+
+		if clustername:
+			try:
+				cnode = getClusterNode(self, nodename, clustername)
+				setNodeFlag(cnode, CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+		return 'Luci is not authenticated to node %s. Please reauthenticate first.' % nodename
 
 	return getNodeLogs(rc)
 
--- conga/luci/site/luci/Extensions/ricci_bridge.py	2006/10/24 16:36:23	1.30.2.2
+++ conga/luci/site/luci/Extensions/ricci_bridge.py	2006/10/25 01:53:34	1.30.2.3
@@ -1,4 +1,5 @@
 import xml
+from time import time, ctime
 from xml.dom import minidom
 from ricci_communicator import RicciCommunicator
 
@@ -284,10 +285,50 @@
 	batch_str = '<module name="log"><request sequence="1254" API_version="1.0"><function_call name="get"><var mutable="false" name="age" type="int" value="18000"/><var mutable="false" name="tags" type="list_str"><listentry value="cluster"/></var></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str, async=False)
-	doc = getPayload(ricci_xml)
-	if not doc or not doc.firstChild:
+	if not ricci_xml:
 		return errstr
-	return doc.firstChild
+	try:
+		log_entries = ricci_xml.getElementsByTagName('logentry')
+		if not log_entries or len(log_entries) < 1:
+			raise Exception, 'no log data is available.'
+	except Exception, e:
+		errstr = 'Error retrieving log data from %s: %s' \
+			% (rc.hostname(), str(e))
+		return errstr
+	time_now = time()
+	entry = ''
+	for i in log_entries:
+		try:
+			log_msg = i.getAttribute('msg')
+		except:
+			log_msg = ''
+
+		if not log_msg:
+			continue
+
+		try:
+			log_age = int(i.getAttribute('age'))
+		except:
+			log_age = 0
+
+		try:
+			log_domain = i.getAttribute('domain')
+		except:
+			log_domain = ''
+
+		try:
+			log_pid = i.getAttribute('pid')
+		except:
+			log_pid = ''
+
+		if log_age:
+			entry += ctime(time_now - log_age) + ' '
+		if log_domain:
+			entry += log_domain
+		if log_pid:
+			entry += '[' + log_pid + ']'
+		entry += ': ' + log_msg + '<br/>'
+	return entry
 
 def nodeReboot(rc):
 	batch_str = '<module name="reboot"><request sequence="111" API_version="1.0"><function_call name="reboot_now"/></request></module>'



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-10-25  1:11 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-10-25  1:11 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-10-25 01:11:09

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py ricci_bridge.py 
Added files:
	luci/logs      : Makefile index_html 

Log message:
	Frontend support for the log display fixes.

	The added index_html file (which is responsible for most of the diff)
	is a stripped-down version of the one we display everywhere else, so
	there's really nothing new there.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.90&r2=1.91
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/logs/Makefile.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/logs/index_html.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.128&r2=1.129
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&r1=1.33&r2=1.34

--- conga/luci/cluster/form-macros	2006/10/16 20:34:37	1.90
+++ conga/luci/cluster/form-macros	2006/10/25 01:11:08	1.91
@@ -1716,7 +1716,7 @@
 <div metal:define-macro="nodelogs-form">
 	<h2>Recent Log Activity for <span tal:replace="request/nodename"/></h2>
 	<hr/>
-	<span tal:replace="python: here.getLogsForNode(request)"/>
+	<span tal:replace="structure python: here.getLogsForNode(request)"/>
 </div>
 
 <div metal:define-macro="nodeadd-form">
/cvs/cluster/conga/luci/logs/Makefile,v  -->  standard output
revision 1.1
--- conga/luci/logs/Makefile
+++ -	2006-10-25 01:11:13.257613000 +0000
@@ -0,0 +1,19 @@
+LUCI_HOST=luci
+LUCI_USER=admin
+LUCI_PASS=changeme
+LUCI_FTP=$(LUCI_HOST):8021
+LUCI_HTTP=http://$(LUCI_HOST):8080/luci
+
+all:
+	@true
+
+# import local page templates to the Luci server
+import:
+	@if test "$(FILES)"; then \
+		../load_site.py -u $(LUCI_USER):$(LUCI_PASS) $(LUCI_HTTP)/logs/ $(FILES) ; \
+	else \
+		find . -follow -maxdepth 1 -type f -not -name "Makefile*" -not -name ".*" -print0 | xargs -0 ../load_site.py -u $(LUCI_USER):$(LUCI_PASS) $(LUCI_HTTP)/logs/ ; \
+	fi
+
+export:
+	@wget -q -r -nH --cut-dirs=2 "ftp://$(LUCI_USER):$(LUCI_PASS)@$(LUCI_FTP)/luci/logs/*"
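
With the placeholder host and credentials defined at the top of the
Makefile overridden to match a real server, a single template can be
pushed with, for example, "make import FILES=index_html", and the whole
directory mirrored back out with "make export".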
/cvs/cluster/conga/luci/logs/index_html,v  -->  standard output
revision 1.1
--- conga/luci/logs/index_html
+++ -	2006-10-25 01:11:13.340974000 +0000
@@ -0,0 +1,84 @@
+<metal:page define-macro="master"><metal:doctype define-slot="doctype"><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"></metal:doctype>
+<metal:block define-slot="top_slot" />
+<metal:block use-macro="here/global_defines/macros/defines" />
+
+<html xmlns="http://www.w3.org/1999/xhtml"
+      xml:lang="en"
+      lang="en"
+      tal:attributes="lang language;
+                      xml:lang language">
+
+  <head metal:use-macro="here/header/macros/html_header">
+
+    <metal:fillbase fill-slot="base">
+      <metal:baseslot define-slot="base">
+        <base href="" tal:attributes="href here/renderBase" />
+      </metal:baseslot>
+    </metal:fillbase>
+
+    <metal:headslot fill-slot="head_slot"
+                    tal:define="lang language;
+                                charset site_properties/default_charset|string:utf-8">
+
+      <metal:cache use-macro="here/global_cache_settings/macros/cacheheaders">
+        Get the global cache headers located in global_cache_settings.
+      </metal:cache>
+
+      <metal:headslot define-slot="head_slot" />
+      <tal:comment replace="nothing"> A slot where you can insert elements in the header from a template </tal:comment>
+    </metal:headslot>
+    
+    <metal:styleslot fill-slot="style_slot">
+      <tal:comment replace="nothing"> A slot where you can insert CSS in the header from a template </tal:comment>
+      <metal:styleslot define-slot="style_slot" />
+    </metal:styleslot>
+
+    <metal:cssslot fill-slot="css_slot">
+      <tal:comment replace="nothing"> This is deprecated, please use style_slot instead. </tal:comment>
+      <metal:cssslot define-slot="css_slot" />
+    </metal:cssslot>
+
+    <metal:javascriptslot fill-slot="javascript_head_slot">
+      <tal:comment replace="nothing"> A slot where you can insert javascript in the header from a template </tal:comment>
+      <metal:javascriptslot define-slot="javascript_head_slot" />
+    </metal:javascriptslot>
+  </head>
+
+  <script type="text/javascript">
+	function delWaitBox() {
+		var waitbox = document.getElementById('waitbox');
+		if (!waitbox)
+			return (-1);
+		waitbox.parentNode.removeChild(waitbox);
+		return (0);
+	}
+  </script>
+
+  <body onLoad="javascript:delWaitBox()"
+		tal:attributes="class here/getSectionFromURL;
+                        dir python:test(isRTL, 'rtl', 'ltr')">
+    <div id="visual-portal-wrapper">
+
+      <div id="portal-top" i18n:domain="plone">
+
+        <div id="portal-header">
+             <a metal:use-macro="here/global_logo/macros/portal_logo">
+               The portal logo, linked to the portal root
+             </a>
+      </div></div></div>
+
+      <div class="visualClear"><!-- --></div>
+
+	  <div id="waitbox">
+		<span>
+			Log information for <span tal:replace="request/nodename | string: host"/> is being retrieved...
+		</span>
+	    <img src="/luci/storage/100wait.gif" />
+	  </div>
+
+      <div id="log_data">
+          <span tal:replace="structure python: here.getLogsForNode(request)" />
+      </div>
+</body>
+</html>
+</metal:page>
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/10/25 00:43:48	1.128
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/10/25 01:11:08	1.129
@@ -2398,7 +2398,7 @@
     states = getDaemonStates(rc, dlist)
     infohash['d_states'] = states
 
-  infohash['logurl'] = baseurl + "?pagetype=" + NODE_LOGS + "&nodename=" + nodename + "&clustername=" + clustername
+  infohash['logurl'] = '/luci/logs/?nodename=' + nodename + '&clustername=' + clustername
   return infohash
   #get list of faildoms for node
 
@@ -2434,7 +2434,7 @@
       map['status'] = NODE_INACTIVE
       map['status_str'] = NODE_INACTIVE_STR
 
-    map['logurl'] = baseurl + "?pagetype=" + NODE_LOGS + "&nodename=" + name + "&clustername=" + clustername
+    map['logurl'] = '/luci/logs/?nodename=' + name + '&clustername=' + clustername
     #set up URLs for dropdown menu...
     if map['status'] == NODE_ACTIVE:
       map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_LEAVE_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
--- conga/luci/site/luci/Extensions/ricci_bridge.py	2006/10/25 00:43:48	1.33
+++ conga/luci/site/luci/Extensions/ricci_bridge.py	2006/10/25 01:11:08	1.34
@@ -326,10 +326,8 @@
 		if log_domain:
 			entry += log_domain
 		if log_pid:
-			entry += '[' + log_pid + ']' + ': '
-		else
-			entry += ': '
-		entry += log_msg + '<br/>'
+			entry += '[' + log_pid + ']'
+		entry += ': ' + log_msg + '<br/>'
 	return entry
 
 def nodeReboot(rc):




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-10-13 21:25 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-10-13 21:25 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-10-13 21:25:14

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	Fix for Zope brain damage that broke the fence device page

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.83&r2=1.84
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.108&r2=1.109

--- conga/luci/cluster/form-macros	2006/10/13 21:01:59	1.83
+++ conga/luci/cluster/form-macros	2006/10/13 21:25:14	1.84
@@ -1972,7 +1972,7 @@
 		set_page_title('Luci ??? cluster ??? fence devices');
 	</script>
 	<h2>Shared Fence Devices for Cluster: <span tal:content="request/clustername"/></h2>
-  <tal:block tal:define="global fencedevinfo python: here.getFenceInfo(modelb)"/>
+  <tal:block tal:define="global fencedevinfo python: here.getFenceInfo(modelb, None)"/>
 <tal:block tal:define="global fencedevs python: fencedevinfo['fencedevs']"/>
   <span tal:repeat="fencedev fencedevs">
    <h3>Name: <span tal:content="fencedev/name"/></h3>
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/10/12 22:11:30	1.108
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/10/13 21:25:14	1.109
@@ -2117,7 +2117,7 @@
 
   return resultlist
 
-def getFenceInfo(self, model, request=None):
+def getFenceInfo(self, model, request):
   map = {}
   fencedevs = list() 
   level1 = list()
@@ -2924,6 +2924,35 @@
 	'smb': addSmb
 }
 
+def resolveClusterChanges(self, clusterName, modelb):
+	try:
+		mb_nodes = dict.fromkeys(modelb.getNodes())
+		if not mb_nodes or not len(mb_nodes):
+			raise
+		mb_map = {}
+		for i in iter(mb_nodes):
+			mb_map[i] = i
+	except:
+		return 'Unable to find cluster nodes for ' + clusterName
+
+	try:
+		cluster_node = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName)
+		if not cluster_node:
+			raise
+	except:
+		return 'Unable to find an entry for ' + clusterName + ' in the Luci database.'
+
+	try:
+		db_nodes = cluster_node.objectItems('Folder')
+		if not db_nodes or not len(db_nodes):
+			raise
+		db_map = {}
+		for i in iter(db_nodes):
+			db_map[i[0]] = i[0]
+	except:
+		# Should we just create them all? Can this even happen?
+		return 'Unable to find database entries for any nodes in ' + clusterName
+
 def addResource(self, request, ragent):
 	if not request.form:
 		return (False, {'errors': ['No form was submitted.']})
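
The resolveClusterChanges hunk above ends after building the two maps,
so the actual reconciliation must land in a later commit. Purely as an
illustration of where it appears to be heading, the comparison step
might look something like this (hypothetical names, not committed code):

	# Nodes in the cluster model with no Luci database entry, and
	# database entries for nodes no longer in the model.
	missing_from_db = [name for name in mb_map if name not in db_map]
	stale_in_db = [name for name in db_map if name not in mb_map]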




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-08-03 18:36 shuennek
  0 siblings, 0 replies; 39+ messages in thread
From: shuennek @ 2006-08-03 18:36 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	shuennek at sourceware.org	2006-08-03 18:36:21

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	Fixed problems with adding an IP resource; improved checking of flags on FS and NFS mounts. -Stephen Huenneke

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.16&r2=1.17
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.33&r2=1.34

--- conga/luci/cluster/form-macros	2006/08/02 17:27:18	1.16
+++ conga/luci/cluster/form-macros	2006/08/03 18:36:21	1.17
@@ -1,13 +1,13 @@
-<html>
-
-<head>
-	<title tal:content="template/title">The title</title>
-</head>
-
+<html>	
+		
+<head>	
+		<title tal:content="template/title">The title</title>
+</head>	
+	
 <body>
 
 <div metal:define-macro="entry-form">
-	<h2>Entry Form</h2>
+		<h2>Entry Form</h2>
 </div>
 
 <div metal:define-macro="busywaitpage">
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/08/03 13:37:39	1.33
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/08/03 18:36:21	1.34
@@ -1437,9 +1437,10 @@
   for res in modelb.getResources():
     if res.getName() == name:
           resMap['name'] = res.getName()
-          resMap['type'] = res.resource_type
-          resMap['tag_name'] = res.TAG_NAME
-          resMap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + res.getName() + "&pagetype=" + RESOURCE_CONFIG
+	  resMap['type'] = res.resource_type
+	  resMap['tag_name'] = res.TAG_NAME
+	  resMap['attrs'] = res.attr_hash
+	  resMap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + res.getName() + "&pagetype=" + RESOURCE_CONFIG
           return resMap
 
   return {}
@@ -1487,7 +1488,7 @@
   if not request.form:
     return "Nothing submitted, no changes made."
                                                                                 
-  if not request.form["resourceName"]:
+  if request.form['type'] != 'ip' and not request.form['resourceName']:
     return "Please enter a name for the resource."
   types = {'ip': addIp,
            'fs': addFs,
@@ -1523,8 +1524,10 @@
   flag = self.restrictedTraverse(objpath)
   flag.manage_addProperty(BATCH_ID,batch_id, "string")
   flag.manage_addProperty(TASKTYPE,RESOURCE_ADD, "string")
-  flag.manage_addProperty(FLAG_DESC,"Creating Resource \'" + request.form['resourceName'] + "\'", "string")
-
+  if type != 'ip':
+	  flag.manage_addProperty(FLAG_DESC,"Creating New Resource \'" + request.form['resourceName'] + "\'", "string")
+  else:
+	  flag.manage_addProperty(FLAG_DESC,"Creating New Resource \'" + res.attr_hash['address'] + "\'", "string")
   response = request.RESPONSE
   response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
 




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-07-21 14:49 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-07-21 14:49 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-07-21 14:49:47

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	Committing cluster add-node bits before somebody else commits and causes rejects :)

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.6&r2=1.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.7&r2=1.8

--- conga/luci/cluster/form-macros	2006/07/20 16:59:33	1.6
+++ conga/luci/cluster/form-macros	2006/07/21 14:49:46	1.7
@@ -22,6 +22,7 @@
    </table>
     
   </div>
+
   <div metal:define-macro="clusters-form">
      <table>
       <tbody>
@@ -65,12 +66,12 @@
       </tbody>
      </table>
   </div>
+
   <div metal:define-macro="cluster-form">
    <h2>Cluster Form</h2>
   </div>
 
 
-
   <div metal:define-macro="clusteradd-form" style="margin-left: 1em">
 	<script type="text/javascript" src="/luci/homebase/homebase_common.js">
 	</script>
@@ -153,7 +154,8 @@
 				<tr class="systemsTable"><td class="systemsTable" colspan="2">
 					<div class="systemsTableTop">
 						<strong>Cluster Name:</strong>
-						<input type="text" id="clusterName" name="clusterName" tal:attributes="value python: sessionObj['requestResults']['clusterName']" />
+						<input type="text" id="clusterName" name="clusterName"
+							tal:attributes="value python: sessionObj['requestResults']['clusterName']" />
 					</div>
 				</td></tr>
 				<tr class="systemsTable">
@@ -175,9 +177,7 @@
 				</td></tr>
 			</tfoot>
 
-			<span tal:omit-tag=""
-				tal:define="global sysNum python: 0"
-			/>
+			<span tal:omit-tag="" tal:define="global sysNum python: 0" />
 
 			<tbody class="systemsTable">
 			<tal:block tal:repeat="node python: sessionObj['requestResults']['nodeList']">
@@ -202,19 +202,16 @@
 								value python: nodeAuth and '[authenticated]' or '';
 								class python: 'hbInputPass' + ('errors' in node and ' error' or '');
 								id python: '__SYSTEM' + str(sysNum) + ':Passwd';
-								name python: '__SYSTEM' + str(sysNum) + ':Passwd';
+								name python: '__SYSTEM' + str(sysNum) + ':Passwd'"
 						/>
 					</td>
 				</tr>
-				<span tal:omit-tag=""
-					tal:define="global sysNum python: sysNum + 1"
-				/>
+				<span tal:omit-tag="" tal:define="global sysNum python: sysNum + 1" />
 			</tal:block>
 			</tbody>
 		</table>
+		<input type="hidden" name="numStorage" tal:attributes="value python: sysNum" />
 
-		<input type="hidden" name="numStorage"
-			tal:attributes="value python: sysNum" />
 		</tal:block>
 
 		<div class="hbSubmit" id="hbSubmit">
@@ -338,8 +335,13 @@
 	<script type="text/javascript" src="/luci/homebase/validate_cluster_add.js">
 	</script>
 
+	<input type="hidden" name="clusterName"
+		tal:attributes="value request/form/clusterName | request/clustername | none"
+	/>
+
 	<form name="adminform" action="" method="post">
-		<input name="numStorage" id="numStorage" type="hidden" value="0" />
+		<input name="numStorage" type="hidden" value="1" />
+		<input name="pagetype" type="hidden" value="15" />
 
 		<h2>Add a Node to a Cluster</h2>
 
@@ -347,10 +349,7 @@
 			<thead class="systemsTable">
 				<tr class="systemsTable"><td class="systemsTable" colspan="2">
 					<div class="systemsTableTop">
-						<strong>Cluster Name</strong>
-						<select class="hbInputSys" id="clusterName" name="clusterList">
-							<option>Fill this in</option>
-						</select>
+						<strong>Cluster Name</strong> <span tal:content="request/form/clusterName | request/clustername | none" />
 					</div>
 				</td></tr>
 				<tr class="systemsTable">
@@ -362,7 +361,9 @@
 			<tfoot class="systemsTable">
 				<tr class="systemsTable"><td colspan="2" class="systemsTable">
 					<div id="allSameDiv">
-						<input type="checkbox" class="allSameCheckBox" name="allSameCheckBox" id="allSameCheckBox" onClick="allPasswdsSame(adminform);"/> Check if cluster node passwords are identical.
+						<input type="checkbox" class="allSameCheckBox"
+							name="allSameCheckBox" id="allSameCheckBox" onClick="allPasswdsSame(adminform);"/>
+						Check if cluster node passwords are identical.
 					</div>
 				</td></tr>
 
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/07/20 16:59:33	1.7
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/07/21 14:49:47	1.8
@@ -18,10 +18,7 @@
 
 CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
 
-def validatePost(self, request):
-	if int(request.form['pagetype']) != 6:
-		return
-
+def validateCreateCluster(self, request):
 	errors = list()
 	messages = list()
 	nodeList = list()
@@ -141,6 +138,27 @@
 	messages.append('Creation of cluster \"' + clusterName + '\" has begun')
 	return (True, {'errors': errors, 'messages': messages })
 
+def validateAddClusterNode(self, request):
+	if 'clusterName' in request.form:
+		clusterName = request.form['clusterName']
+	else:
+		return (False, {'errors': [ 'Cluster name is missing'] })
+
+	return None
+
+formValidators = {
+	6: validateCreateCluster,
+	15: validateAddClusterNode
+}
+
+def validatePost(self, request):
+	pagetype = int(request.form['pagetype'])
+	if pagetype not in formValidators:
+		return None
+	else:
+		return formValidators[pagetype](self, request)
+
+
 def createCluChooser(self, request, systems):
   dummynode = {}
   




* [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...
@ 2006-07-20 16:59 rmccabe
  0 siblings, 0 replies; 39+ messages in thread
From: rmccabe @ 2006-07-20 16:59 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-07-20 16:59:33

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: cluster_adapters.py 

Log message:
	provisionally add clusters to the management interface while deployment is in progress

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.5&r2=1.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.6&r2=1.7

--- conga/luci/cluster/form-macros	2006/07/19 20:20:53	1.5
+++ conga/luci/cluster/form-macros	2006/07/20 16:59:33	1.6
@@ -104,7 +104,7 @@
 			<tfoot class="systemsTable">
 				<tr class="systemsTable"><td colspan="2" class="systemsTable">
 					<div>
-						<input type="checkbox" name="allSameCheckBox" id="allSameCheckBox" onClick="allPasswdsSame(adminform);"/> Check if storage system passwords are identical.
+						<input type="checkbox" name="allSameCheckBox" id="allSameCheckBox" onClick="allPasswdsSame(adminform);"/> Check if cluster node passwords are identical.
 					</div>
 				</td></tr>
 
@@ -362,7 +362,7 @@
 			<tfoot class="systemsTable">
 				<tr class="systemsTable"><td colspan="2" class="systemsTable">
 					<div id="allSameDiv">
-						<input type="checkbox" class="allSameCheckBox" name="allSameCheckBox" id="allSameCheckBox" onClick="allPasswdsSame(adminform);"/> Check if storage system passwords are identical.
+						<input type="checkbox" class="allSameCheckBox" name="allSameCheckBox" id="allSameCheckBox" onClick="allPasswdsSame(adminform);"/> Check if cluster node passwords are identical.
 					</div>
 				</td></tr>
 
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/07/19 22:28:17	1.6
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/07/20 16:59:33	1.7
@@ -14,11 +14,11 @@
 #then only display chooser if the current user has 
 #permissions on at least one. If the user is admin, show ALL clusters
 
-from homebase_adapters import nodeAuth, nodeUnauth                     
+from homebase_adapters import nodeAuth, nodeUnauth, manageCluster
 
 CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
 
-def validatePost(request):
+def validatePost(self, request):
 	if int(request.form['pagetype']) != 6:
 		return
 
@@ -121,6 +121,13 @@
 			errors.append('Unable to generate cluster creation ricci command')
 			return (False, {'errors': errors, 'requestResults':cluster_properties })
 
+		error = manageCluster(self, clusterName, nodeList)
+		if error:
+			nodeUnauth(nodeList)
+			cluster_properties['isComplete'] = False
+			errors.append(error)
+			return (False, {'errors': errors, 'requestResults':cluster_properties })
+
 		for i in nodeList:
 			try:
 				rc = RicciCommunicator(i['ricci_host'])
@@ -138,7 +145,7 @@
   dummynode = {}
   
   if request.REQUEST_METHOD == 'POST':
-    ret = validatePost(request)
+    ret = validatePost(self, request)
     try:
 		request.SESSION.set('checkRet', ret[1])
     except:




end of thread, other threads:[~2007-09-21  3:11 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-02-07 17:02 [Cluster-devel] conga/luci cluster/form-macros site/luci/Exten rmccabe
  -- strict thread matches above, loose matches on Subject: below --
2007-09-21  3:11 rmccabe
2007-06-19 15:54 rmccabe
2007-05-03 20:16 rmccabe
2007-03-15 16:41 rmccabe
2007-03-14 22:38 rmccabe
2007-03-14 22:37 rmccabe
2007-03-05 16:50 rmccabe
2007-03-05 16:50 rmccabe
2007-03-05 16:49 rmccabe
2007-02-15 22:44 rmccabe
2007-02-08  3:46 rmccabe
2007-02-07 16:55 rmccabe
2007-02-02  4:34 rmccabe
2007-02-02  0:11 rmccabe
2007-02-01 20:49 rmccabe
2007-01-31 23:36 rmccabe
2007-01-31  5:26 rmccabe
2007-01-23 13:53 rmccabe
2007-01-15 18:21 rmccabe
2007-01-11 19:11 rmccabe
2007-01-10 21:40 rmccabe
2007-01-06  3:29 rmccabe
2006-12-14 23:14 rmccabe
2006-12-14 18:22 rmccabe
2006-12-11 22:42 rmccabe
2006-12-11 21:51 rmccabe
2006-12-06 22:11 rmccabe
2006-12-06 21:16 rmccabe
2006-11-13 21:40 rmccabe
2006-11-12  2:10 rmccabe
2006-11-09 20:32 rmccabe
2006-11-03 22:48 rmccabe
2006-10-25  1:53 rmccabe
2006-10-25  1:11 rmccabe
2006-10-13 21:25 rmccabe
2006-08-03 18:36 shuennek
2006-07-21 14:49 rmccabe
2006-07-20 16:59 rmccabe
