UNIVERSITY OF THE WESTERN CAPE

Department of Computer Science

2017 Honours Projects

************************************************************************************************************************************************************************************

BANG Projects (Prof Tucker).

Zenzeleni - a solar-powered and wireless rural community network.

Available projects

Billing services for data and voice.

We have recently added low-cost, high-speed Internet and VoIP break-in and break-out services to a rural community-run wireless mesh network. The network is managed by a co-operative in Mankosi and is physically housed in people's homes. We use a homegrown prepaid billing system for VoIP, co-designed with the community, which uses Interactive Voice Response (IVR) and A2Billing. We need to add functionality to bill various forms of data usage in addition to voice. Note that we also bill for charging mobile phones at solar charging spots. Ideally, an integrated billing platform needs to be worked out with the community: there will be different types of users, e.g. community residents and entities like a backpackers, secondary (junior and senior) school staff and students, a clinic and an NGO. The project requires understanding the current system, examining similar systems available on the Internet and coming up with a plan to improve functionality and ease of use. The former requires automated function, unit and integration testing, and the latter requires user-centred usability testing.
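
As an illustration of the kind of integrated billing logic envisaged, here is a minimal sketch in Python of debiting one prepaid balance for data usage across different user classes; the tariff names, rates and account fields are hypothetical and are not taken from the existing A2Billing setup.

    # Hypothetical prepaid data-billing sketch; tariffs and account fields are
    # illustrative and not taken from the existing A2Billing configuration.
    TARIFFS_ZAR_PER_MB = {          # assumed example rates per user class
        "resident": 0.05,
        "backpackers": 0.15,
        "school": 0.02,
    }

    def debit_data_usage(account, megabytes, user_class):
        """Debit a prepaid balance for data usage; returns the charge applied."""
        rate = TARIFFS_ZAR_PER_MB[user_class]
        charge = round(megabytes * rate, 2)
        if account["balance"] < charge:
            raise ValueError("insufficient credit")   # caller can block traffic here
        account["balance"] -= charge
        return charge

    account = {"owner": "example user", "balance": 20.00}   # balance in ZAR
    print(debit_data_usage(account, megabytes=150, user_class="resident"))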

Field validation tools.

Because Zenzeleni Mankosi is so remote, it is difficult to debug network problems and bottlenecks remotely. We need apps that run on mobile phones for use in the field to debug issues such as signal strength, data throughput, packet loss and jitter (variations in delay), amongst other problems. A couple of years ago, we built a mobile app that mimics the functionality of D-ITG to conduct simple Quality of Service (QoS) tests by simulating traffic flows. We would like to build on this work by providing a mobile tool that can operate in the field. Of course, we would want to test the app in a local laboratory testbed (which we have in operation in the BANG lab). The project requires understanding the mobile D-ITG app that we already have, and also looking at mobile field validation tools available online in order to gather ideas to improve functionality and ease of use. The former requires automated function, unit and integration testing, and the latter requires user-centred usability testing.
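
A minimal sketch, in the spirit of the D-ITG-style tests described above, of how a field tool could estimate loss, throughput and jitter from a stream of sequence-numbered UDP probe packets; the port, packet count, size and sending interval are illustrative values only.

    # Minimal QoS probe sketch: a sender streams sequence-numbered, timestamped
    # UDP packets and a receiver estimates loss, throughput and jitter.
    import socket, struct, time

    PORT, PACKETS, SIZE, INTERVAL = 5005, 1000, 512, 0.01   # assumed test settings

    def send(dest_ip):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for seq in range(PACKETS):
            payload = struct.pack("!Id", seq, time.time()).ljust(SIZE, b"x")
            s.sendto(payload, (dest_ip, PORT))
            time.sleep(INTERVAL)

    def receive(timeout=5.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", PORT))
        s.settimeout(timeout)
        seen, delays, received_bytes = set(), [], 0
        first = last = None
        try:
            while True:
                data, _ = s.recvfrom(2048)
                last = time.time()
                first = first or last
                seq, sent_at = struct.unpack("!Id", data[:12])
                seen.add(seq)
                delays.append(last - sent_at)   # one-way delay; assumes synced clocks
                received_bytes += len(data)
        except socket.timeout:
            pass
        if not delays:
            print("no packets received")
            return
        duration = max(last - first, 1e-6)
        jitter = sum(abs(a - b) for a, b in zip(delays, delays[1:])) / max(len(delays) - 1, 1)
        print("packet loss %:", 100 * (1 - len(seen) / PACKETS))
        print("throughput kbit/s:", received_bytes * 8 / 1000 / duration)
        print("mean jitter ms:", jitter * 1000)

    # run receive() on one device and send("<receiver ip>") on the other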

Simulation of wireless mesh networks in ns-3 or some other simulator.

There are three main drivers that allow implementation of wireless mesh networks: madwifi, ath5k and ath9k. This project aims to obtain a good understanding of real-life implementations of mesh mode, and then to review the wireless mesh (ad hoc) modules in ns-3, providing, where necessary, patches to allow simulation of the mesh modes with the three drivers mentioned. This should allow one to configure quality of service (QoS) according to the implementations studied and/or created above. Once a robust and trusted mesh module exists for ns-3, simulations can be carried out to measure the performance of a mesh network, and the results will be compared with those obtained in a real network. This project can be restricted, e.g. for an Honours, by focusing on a single driver. An MSc would compare drivers.

Back-end traffic shaping and prioritisation/Front-end user information.

Given the recent addition of low-cost broadband to Zenzeleni Mankosi, we expect an explosion in the use of WiFi-enabled phones on the network. In other words, we expect to add tens, if not hundreds, of WiFi-enabled smart and not-so-smart phones onto the mesh network in infrastructure mode, for use with data and with VoIP. This project endeavours to understand community members' usage and preferences and prepares a tool that allows shaping and prioritising of traffic accordingly, from the perspective of the routers themselves. The project's technical aspects delve into traffic shaping and prioritisation, e.g. exploring IntServ and DiffServ approaches for the wireless mesh network, which is currently based on BATMAN-adv; we are looking toward LibreMesh for the near future, so work with the latter is encouraged. On the other hand, a front-end, user-centred approach aims to inform the end user about which network to use to make a call or surf to a site, based on network conditions and availability, e.g. a choice between GSM/2G/3G and WiFi from the mesh network; if the mesh network is at capacity, the user would rather be referred to an available mobile data or GSM connection. This end-user app must include a cost analysis, i.e. informing the user about the most cost-efficient and/or battery-efficient way to do what the user wants to do, thus empowering the user to make an informed choice (which, of course, could be automated with appropriate settings).
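
To make the back-end side concrete, here is a hedged sketch of DiffServ-style prioritisation using Linux tc on a mesh router; the interface name (bat0), the rates and the decision to classify VoIP by the DSCP EF code point are assumptions for illustration, not the project's prescribed design.

    # Sketch of DiffServ-style prioritisation with Linux tc on a mesh router.
    # Must run as root on a router with the tc tooling installed (e.g. OpenWrt).
    import subprocess

    IFACE = "bat0"   # assumed batman-adv interface name

    RULES = [
        # root HTB qdisc; unclassified traffic goes to class 1:30
        f"tc qdisc add dev {IFACE} root handle 1: htb default 30",
        # high-priority class for VoIP, lower-priority class for bulk data
        f"tc class add dev {IFACE} parent 1: classid 1:10 htb rate 1mbit ceil 4mbit prio 0",
        f"tc class add dev {IFACE} parent 1: classid 1:30 htb rate 2mbit ceil 4mbit prio 2",
        # packets marked DSCP EF (0xb8 in the ToS byte) go to the VoIP class
        f"tc filter add dev {IFACE} parent 1: protocol ip u32 "
        f"match ip dsfield 0xb8 0xfc flowid 1:10",
    ]

    for rule in RULES:
        subprocess.run(rule.split(), check=True)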

Work in progress

Mobile battery usage comparison.

Compare cell phones using mobile data vs. Wi-Fi to do the same things, e.g. WhatsApp, Facebook, email and voice over Internet Protocol (VoIP). This project requires a piece of software to drive the exact same application usage on the same phones in order to compare the battery consumption of mobile data vs. Wi-Fi; with GSM on and off. The hypothesis: Wi-Fi consumes less battery than 3G for the same services. This is in progress by Shree Om (PhD student), Dr Carlos Rey-Moreno and Prof. Blignaut (Statistics).

Impact of WiFi clients on mesh networks: scalability and quality of service.

PhD Computer Science. Shree Om (supervised by Tucker) aims to explore the scalability of nodes, clients and the number of calls for a village telco network running batman-adv, with a testbed at the university and an eye toward deployment in the field.

Improving performance in mesh networks.

MSc. Taha Abdalla (supervised by Bagula) aims to reduce the number of OGMs (originator messages) used by batman-adv to spread the routing table throughout the mesh network.

Recently finished projects that can be continued

Community Telco: an acceptable solution for providing affordable communications in rural areas of South Africa.

PhD. Dr Carlos Rey-Moreno (now a post-doctoral fellow) collected data in Mankosi to examine the acceptability of Zenzeleni in terms of technical, social, financial and legal concerns. The result is South Africa's first and only legally run, rural, community-owned ISP. We need to collect more data to examine the impact of Zenzeleni in this area and surrounding areas.

Trust and e-billing for voice services on a rural community mesh network.

MSc. Josee Ufitamahoro (supervised by Venter, Tucker) conducted participatory design with the local community to design a billing system for voice services. A prototype was implemented with A2Billing and is currently in use in Mankosi by Zenzeleni Network, a not-for-profit community cooperative.

Traffic generation and analysis on a mobile device.

 MSc cum laude. Ghislaine Livie Ngangom Tiemeni (supervised by Venter, Tucker) implemented a prototype of D-ITG on a mobile phone to enable mobile devices to generate and analyse traffic over a wireless network.

SignSupport - a mobile communication app for Deaf people.

Available projects

Authoring tool for SignSupport, a rich communication service.

SignSupport is a mobile app that helps Deaf people communicate with hearing people who cannot sign. We currently have three scenarios for SignSupport: a visit to the pharmacy, computer literacy training and diabetes self-management information. We have built a prototype of an authoring tool for SignSupport that an informed end user can use to produce any of these scenarios and, in fact, create their own, e.g. to report a crime at the police station. The problem with our prototype is that it generates output that must be consumed by another app to render the user interface (for the Deaf user). We want to bypass that stage and have the authoring tool generate HTML (more likely HTML5) directly, so that the scenario (output) app can run on any device, in any browser. It would be useful to read up on SignSupport, especially Sifiso Duma's thesis, before taking on this project.
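
A minimal sketch of the HTML5 output stage: given a (hypothetical) scenario description, the authoring tool could emit a self-contained page in which each step is a signed prompt video followed by signed answer videos; the scenario structure and video file names are placeholders.

    # Sketch of writing a SignSupport scenario straight to HTML5.
    # The scenario structure and video file names are hypothetical examples.
    scenario = [
        {"prompt": "greeting.mp4", "options": ["yes.mp4", "no.mp4"]},
        {"prompt": "symptom_question.mp4", "options": ["headache.mp4", "fever.mp4"]},
    ]

    def render(scenario, out_path="scenario.html"):
        parts = ["<!DOCTYPE html><html><body>"]
        for i, step in enumerate(scenario):
            parts.append(f'<section id="step{i}">')
            # the prompt video asks the question in signed language
            parts.append(f'<video src="{step["prompt"]}" controls></video>')
            for option in step["options"]:
                # each answer is itself a short signed video the user taps
                parts.append(f'<video src="{option}" controls width="160"></video>')
            parts.append("</section>")
        parts.append("</body></html>")
        with open(out_path, "w") as f:
            f.write("\n".join(parts))

    render(scenario)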

Authoring tool for ODK.

We have built a signed language video interface to ODK. Open Data Kit (see https://opendatakit.org/) is a mobile data collection tool out of the University of Washington (UW); we are lucky to actually know the people (Carl and Yaw) who started this project at UW and continue to run it as a business called Nafundi. The main input to ODK in order to render a mobile form/questionnaire is a spreadsheet full of questions (and answers/options). This project builds on our instrumenting of ODK with signed language videos to collect data from Deaf people, by crafting an authoring tool with a graphical front end that produces the Excel (xls) file that ODK consumes. The tool is intended to be used by Deaf people, who can create their own questionnaires with signed language videos. In other words, instead of having to learn Excel, this tool enables a user to populate an ODK form through a more intuitive graphical user interface with signed language videos, which are then turned into the text that populates the Excel spreadsheet ODK expects. This will be especially helpful when creating signed language interfaces for ODK forms, by Deaf signers and/or people working with a signed language interpreter.
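
A minimal sketch of the final step the authoring tool would automate: writing the spreadsheet that ODK consumes. It uses openpyxl and XLSForm-style columns; the media::video column and the example questions are assumptions to be checked against the ODK version in use.

    # Sketch of emitting an XLSForm-style spreadsheet for ODK with openpyxl.
    # Question content and video file names are placeholders supplied by the
    # authoring interface; the media::video column should be verified against
    # the ODK/XLSForm version in use.
    from openpyxl import Workbook

    questions = [
        # (type, name, label, signed-language video shown with the question)
        ("select_one yesno", "consent", "Do you agree to take part?", "consent.mp4"),
        ("text", "comments", "Any other comments?", "comments.mp4"),
    ]

    wb = Workbook()
    survey = wb.active
    survey.title = "survey"
    survey.append(["type", "name", "label", "media::video"])
    for row in questions:
        survey.append(list(row))

    choices = wb.create_sheet("choices")
    choices.append(["list_name", "name", "label"])
    choices.append(["yesno", "yes", "Yes"])
    choices.append(["yesno", "no", "No"])

    wb.save("signed_form.xlsx")   # convert with the XLSForm tools before loading into ODK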

Sign language video frame grabber

This tool is intended to let a sign language user select a key frame from a sign language video that 'represents' the meaning of the video. The intention is to use that selected frame as an icon for the sign language video in an application, perhaps to identify a button or a selection, or to play the full video to clarify the content. In other words, in an app, instead of playing the entire sign language video, we allow a sign language speaker to select a representative frame for that video. A more interesting angle would be for the sign language speaker to select several key frames and animate them as a GIF that suggests the meaning of the clip more completely to a sign language speaker.
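
A minimal sketch, using OpenCV and imageio, of grabbing signer-selected frames and animating them as a GIF icon; the video file name and frame indices are placeholders for whatever the signer chooses in the interface.

    # Sketch of extracting user-selected frames from a sign language video and
    # animating them as a GIF icon. Frame indices come from the signer's choices.
    import cv2
    import imageio

    def frames_to_gif(video_path, frame_indices, out_path="icon.gif"):
        cap = cv2.VideoCapture(video_path)
        frames = []
        for idx in frame_indices:
            cap.set(cv2.CAP_PROP_POS_FRAMES, idx)      # jump to the chosen frame
            ok, frame = cap.read()
            if ok:
                frame = cv2.resize(frame, (160, 120))  # thumbnail size for a button icon
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cap.release()
        imageio.mimsave(out_path, frames, duration=0.5)   # 0.5 s per frame

    frames_to_gif("sign_clip.mp4", [10, 45, 80])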

Evaluation of SignSupport on low-end smartphones.

We have developed and tested the current SignSupport prototype on mid-range smartphones. We know the prototype will work well on later versions of Android. However, we'd like to know if Deaf users will be able to use SignSupport on low-end phones running slightly older versions of Android. This project requires porting SignSupport to such phones and testing it with Deaf users in order to compare users' impressions of the application and its performance. This mostly involves the evaluation of sign language video intelligibility, for which guidelines exist. This project can also involve modifying the way SignSupport handles and stores videos in order to improve performance on lower-end phones.

Android bandwidth pricing calculator.

In 2012, we built a mobile packet monitor front end and back end to collect and visualise the data consumed by various applications running on an Android device. This project carries that prototype further, possibly based on other tools like Android's Data usage app, adding support for multiple data plans, a wider array of price visualisations, support for multiple SSIDs on the WiFi interface, and calculations to help users decide whether they should make a GSM call, a VoIP call or use some sort of breakout, e.g. SkypeOut.
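
A minimal sketch of the underlying cost comparison; all tariffs and the VoIP bitrate are illustrative assumptions that the real calculator would read from the user's data plans.

    # Sketch of the cost comparison behind the calculator: given a call length,
    # compare a GSM call, VoIP over mobile data and VoIP over mesh Wi-Fi.
    GSM_ZAR_PER_MIN = 1.50          # assumed prepaid voice rate
    DATA_ZAR_PER_MB = 0.30          # assumed mobile data rate
    MESH_ZAR_PER_MB = 0.05          # assumed community network rate
    VOIP_KBITS_PER_SEC = 24         # assumed codec bitrate incl. overhead

    def call_costs(minutes):
        voip_mb = VOIP_KBITS_PER_SEC * minutes * 60 / 8 / 1024
        return {
            "GSM call": minutes * GSM_ZAR_PER_MIN,
            "VoIP over mobile data": voip_mb * DATA_ZAR_PER_MB,
            "VoIP over mesh Wi-Fi": voip_mb * MESH_ZAR_PER_MB,
        }

    for option, cost in sorted(call_costs(5).items(), key=lambda kv: kv[1]):
        print(f"{option}: R{cost:.2f}")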

Work in progress

Authoring tool for SignSupport.

MSc. Sifiso Duma (Tucker). Thus far we have designed and built three versions of SignSupport for a Deaf person: 1) visiting a doctor, 2) visiting a pharmacist and 3) assisting with ICDL training. SignSupport essentially presents a scripted conversation flow between a Deaf person and a hearing person around a specific scenario. There is no automatic translation: all of the potential sign language videos are stored on the phone. We are generalising the tool to be able to define conversation flows for any given conversation scenario, e.g. visiting the police station, home affairs or the library. We would like each scenario to be crafted and loaded onto the phone individually, depending on need. The authoring tool is meant to help domain specialists construct the conversation flow and to aid in populating the content with recorded sign language videos and related text and icons. We have a prototype that needs improvement.

Signed language interface for ODK.

Contract programmer (YY Wang, an MSc Computer Science graduate working at a local software engineering company). This project does exactly what the title suggests: it provides signed language interfaces, in this case South African Sign Language (SASL), to collect data from Deaf end users with ODK. In addition to populating the interface with signed language questions and answers (alongside iconic answers), the enhancement includes call-outs to transcription tools so that a SASL interpreter can populate otherwise text-based data for standard ODK tools.

Mobile video relay and security.

PhD. Andre Henney (Tucker). This project integrates a real-time mobile relay system based on the MobileASL codec into SignSupport. The app can be invoked on a mobile device when a Deaf person requires interpretation to clarify information surrounding any given SignSupport scenario, and relies on a remote sign language interpreter. The project also addresses the privacy and security of the interpretation service in the context of South Africa's Protection of Personal Information (POPI) legislation.

SignSupport field clinical trial in an actual pharmacy.

PhD (Pharmacy). Mariam Parker (Bheekie, Tucker) needs to obtain ethics clearance from a national board in order to conduct clinical trials. The goal will be to piggyback on another such effort, e.g. one for diabetes, and also to learn whether or not SignSupport must adhere to telemedicine specifications with respect to technical details such as frame rate, video size, etc.

SignSupport for diabetes information.

PhD (Industrial Design Engineering, TU Delft). Prangnat Chininthorn (Diehl, Tucker). Continuing the work of designing the pharmacy version of SignSupport for her MSc, Prang will look at the best ways to provide diabetes information to Deaf users, according to needs articulated by Deaf people.

Recently finished projects that can be continued

SignSupport mock clinical trial in an actual pharmacy.

PhD (Pharmacy). Mariam Parker (Bheekie, Tucker) trialled the output of Michael Motlhabi's version of SignSupport in an actual pharmacy with Deaf participants in Paarl. However, due to ethics constraints, the Deaf participants were not allowed to use SignSupport in conjunction with actual medications.

Video notification for SignSupport.

For the pharmacy SignSupport scenario, we need to add a video notification, containing a sign language video, within the SignSupport application, to remind a Deaf user when to take a given medication. The notification has two parts: first, a picture of the medication, and second, a sign language video that informs the user how to take it. The reminder system needs to be automatically configured and set into motion by the SignSupport application once a pharmacist adds a prescription. The system needs to be able to handle multiple alerts at the same time. It also needs to have a link from the reminder back to the medicine’s entry in the SignSupport system. Another task is to increase the intensity of the phone’s vibration for alerts. A useful add-on would be to track user compliance of actually taking the medication.

Pattern passcode for SignSupport.

In 2013, Duma modified the standard Android pattern passcode to allow a user to use a point more than once, enabling more complicated patterns with repeatable points. A reused point is colour-coded according to how many times it is used. Modifications in 2014 by Bulumko Matshoba include: removing the advance button, adding a pattern reset, enabling the user to lift the finger and skip to another location for disjunctive patterns, and a formal test to see if pattern passcodes really do work better for Deaf users than textual passwords and PINs. The main reason we are interested in pattern passcodes is that many Deaf users are minimally textually literate and, since they communicate in signed language, may prefer visual passcodes. This also means the pattern reset is not as straightforward as a text password reset via email.

Sign language-based data collection

With ODK (Open Data Kit), we augmented text- and paper-based questionnaires with signed language videos. The problem was that we had to have the text-based questionnaires interpreted on the spot. This limits the data collection process because if we have two interpreters, we can only interview two Deaf people at a time. With a sign language-enhanced system, we can collect yes/no and multiple-choice answers, and even free-form answers in sign language to be interpreted later. All of the questions are asked in South African Sign Language on the device. The existing prototype, developed by Sibusio Sibiya, can be improved and tested out with Deaf end users.

************************************************************************************************************************************************************************************

Internet-of-Things in Motion (Prof Bagula).

The use of drones is justified in situations where the task to be performed is too dangerous, expensive or difficult to be performed by humans or, if not too difficult, can in any case be performed more cheaply and/or efficiently. There are many such applications which are the subject of ongoing research and development. Target search is a common application for the purposes of rescue, monitoring or destruction. Another popular application is area coverage or exploration for multiple purposes such as environment mapping, surveillance, sensor deployment, acting as communications hubs for immobile wireless sensor networks, or aerobiological sampling. These and other research works mention applications such as weather forecasting, fire detection and observation (in both urban and rural environments), smart parking, environmental monitoring and clean-up in smart cities, space exploration, traffic surveillance, logistics in warehouses and factories, agricultural monitoring and interior surveillance of buildings. The Internet-of-Things in motion (IoT-im) targets collaboration between ground-based sensor networks and a team of UAVs to support new and unforeseen applications. This project's objective is to explore IoT-im concepts and techniques for building a framework targeting smart parking and pollution monitoring, while new applications also continue to surface.

Number of students: Maximum 3

  • Tasks:
    • Task1: Smart Agriculture
      • (a) Image Processing
      • (b) Sensor networking
      • (c) Robotics
    • Task2: Pollution Monitoring
      • (a) Image Processing
      • (b) Sensor networking
      • (c) Robotics
    • Task3: Smart Parking
      • (a) Image Processing
      • (b) Sensor networking
      • (c) Robotics

************************************************************************************************************************************************************************************

Projects by Prof Venter.

eQueue

Long queues are the norm rather than the exception at banks, home affairs offices, hospitals, etc. Most clients, citizens and patients feel they wait far too long in queues and are wasting time that could have been better spent doing something else. In some instances, pricing is used to eliminate queuing congestion, allowing clients who paid more access to a priority queue (Pearce, 2016); however, this is often not perceived as being "fair". According to Dickson et al., there are several ways to address the frustration of queuing. The first is to manage the queue and the actual time spent in the queue by matching capacity with customer demand. The second is to deal with customers' perception of the wait. The third is a virtual queue, which eliminates real "waiting" as it allows customers to be busy with other activities while they wait for an appointed time (Dickson, Ford, & Laval, 2005). Thus an electronic or virtual queue could, whilst still adhering to the principle of First-Come-First-Served, alleviate the unnecessary waiting in physical queues (Pambuka, 2016).

The purpose of this project is to design and develop a mobile queueing management system that will virtually organise and manage queues. The research questions that will be investigated are: What type of interface will be best suited for such a queuing application? How should the virtual and physical queues be integrated so as not to frustrate customers? How can changes in the time of appointment be accommodated? How should the queuing system be adapted to fit different circumstances, for example how to handle customers who return for service several times whilst within the system (such as patients in a hospital) (Yom-Tov & Mandelbaum, 2014)?

The developed app will:

  • Determine, from the customer's position, how long it will take them to join the physical queue
  • Send push notifications to inform customers when it is their turn, taking into account where the customer currently is
  • Allow customers to rate the service
  • Allow customers to change their position in the queue if they see they are running late.
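
A minimal sketch of the virtual-queue logic behind the features above: estimating a ticket's wait and deciding when to push a notification, given the customer's travel time; service times and thresholds are placeholder values.

    # Sketch of a virtual queue with wait estimation and a notification trigger.
    from collections import deque

    AVG_SERVICE_MIN = 4                     # assumed average minutes per customer

    queue = deque(["ticket-031", "ticket-032", "ticket-033"])

    def minutes_until_turn(ticket):
        position = list(queue).index(ticket)
        return position * AVG_SERVICE_MIN

    def should_notify(ticket, travel_minutes, buffer_minutes=5):
        """True once the customer must start travelling to arrive in time."""
        return minutes_until_turn(ticket) <= travel_minutes + buffer_minutes

    print(minutes_until_turn("ticket-033"))            # e.g. 8 minutes from now
    print(should_notify("ticket-033", travel_minutes=10))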

 

eShop: the electronic store

According to the World Economic Forum, we stand on the brink of the fourth industrial revolution, a technological revolution that will fundamentally change the way we live and work and will force companies to re-examine the way they do business. Some of the drivers of this revolution are the facts that we are able to connect to anyone, from anywhere, on any device; the number of people that can be reached is limitless (it is scalable); billions of people are connected by mobile devices; and our access to knowledge is unlimited (Schwab, 2016). According to Dai et al., online platforms are growing at three times the rate of brick and mortar stores (Dai, Hoffmann, & Lannes, 2012). However, online shopping is very specific to country and product and has surprisingly shown a decline in recent years (Schultz & Block, 2015).

In this project, several South African online shopping apps will be considered in terms of organisation and interface, with the aim of developing an app that addresses some of the perceived deficiencies. The research questions that will be investigated are: What type of interface would attract customers? How do customers search for products online? Are these methods similar to or different from the search patterns of shopping with brick and mortar retailers in South Africa?

The developed app will be able to:

  • Send push notifications
  • Allow customers to rate the service
  • Keep track of orders
  • Allow searches
  • Allow a request to view an order at a viewing store
  • Notify the customer when a specific order is in the viewing store
  • And more (as indicated by potential customers)

************************************************************************************************************************************************************************************

SASL Projects (Mr Ghaziasgar).

Depth Inference for Hand Tracking

Automatically tracking a hand in a (single) camera feed is a very well-established field of research, with a variety of techniques proposed and used to successfully track one or both hands in a video as they move. The use of a single camera provides only a single 2D view of the scene, meaning that it is not realistically possible to determine how far (the depth) the hand(s) are from the camera at any time. On the other hand, a large body of research has dealt with the problem of determining the depth of objects using two cameras capturing the same scene at the same time. The problem tackled by this project is to apply depth estimation using two cameras to hand tracking. The project would first involve re-implementing one of the existing hand tracking strategies that are documented. Then, it would involve re-implementing one of the existing object depth estimation strategies that are documented. Then, with the position of the hand known, it would be possible to obtain the depth of the hand.
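
A hedged sketch of one standard two-camera approach (OpenCV block matching), assuming the hand tracker supplies a bounding box in the left image; the calibration values (focal length, baseline) and image paths are placeholders.

    # Sketch of two-camera depth estimation with OpenCV block matching; the hand
    # tracker is assumed to supply a bounding box (x, y, w, h) in the left image.
    import cv2
    import numpy as np

    def hand_depth(left_img, right_img, hand_box, focal_px, baseline_m):
        grayL = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
        grayR = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(grayL, grayR).astype(np.float32) / 16.0
        x, y, w, h = hand_box
        region = disparity[y:y + h, x:x + w]
        d = np.median(region[region > 0])          # robust disparity for the hand
        return focal_px * baseline_m / d           # depth = f * B / disparity

    left = cv2.imread("left.png")                  # placeholder stereo pair
    right = cv2.imread("right.png")
    print(hand_depth(left, right, hand_box=(200, 150, 80, 80),
                     focal_px=700.0, baseline_m=0.12))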

Digitised Note-Taking and Patient Management System for Dentists

Most dentists currently use a file-based and paper-based system of organising their patients' information. This is obviously very inefficient and ineffective. This project will involve creating a comprehensive note-taking and patient management suite for use by dentists. It will involve creating a suitable back-end and a very polished app-based and/or web-based front-end.

Facial Expression Recognition for Customer Satisfaction

Customer satisfaction is usually important to most companies. It can be difficult to gauge satisfaction accurately, with most companies relying on customers to answer questionnaires and/or actively come forward with criticism. This project aims to make use of facial expression recognition techniques to attempt to gauge satisfaction automatically. Facial expression recognition techniques are very well-established and the student will be required to first familiarize him/herself with these techniques, and then apply them.
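
A minimal sketch of the pipeline: detect faces with OpenCV's bundled Haar cascade and pass each crop to an expression classifier; the classifier here is a placeholder the student would train.

    # Sketch of face detection feeding an expression classifier.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def predict_expression(face_img):
        # placeholder: a trained expression model would go here
        return "neutral"

    cap = cv2.VideoCapture(0)                     # camera at the service counter
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            label = predict_expression(gray[y:y + h, x:x + w])
            cv2.putText(frame, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("satisfaction", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()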

Physical Book Search Assistant

With the introduction of eBooks, searching a book for a desired word/phrase has become very easy. With the touch of a few keys, i.e. <Ctrl-F>, <Type in Search Phrase> and <Enter>, the user can quickly find the word/phrase he/she is looking for. This luxury does not extend to physical books. While a glossary goes a long way towards addressing this problem, it is not nearly as effective or convenient as being able to search in an eBook. This project entails the following: a physical book is placed on a viewing surface; a camera (most likely mounted in a fixed face-down position) is pointed towards the pages of the book and continuously captures the scene; the system then analyses the images captured as the user pages through the book, using Optical Character Recognition (OCR) techniques to locate a desired word/phrase in the book. The desired word/phrase can be indicated on the computer screen for the user to see. Since book scanning and OCR are very well-established fields, this would require the student to familiarise him/herself with these techniques and apply them to this task.
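
A minimal sketch of the search step using pytesseract, one widely used OCR library: OCR a captured page image and return bounding boxes for a matching word so the UI can highlight it; the image path and search word are examples, and camera capture is handled elsewhere.

    # Sketch of OCR-based word search on a captured page image.
    import pytesseract
    from PIL import Image

    def find_word(page_image_path, word):
        data = pytesseract.image_to_data(Image.open(page_image_path),
                                         output_type=pytesseract.Output.DICT)
        hits = []
        for i, token in enumerate(data["text"]):
            if word.lower() in token.lower():
                # bounding box lets the UI highlight the word on screen
                hits.append((data["left"][i], data["top"][i],
                             data["width"][i], data["height"][i]))
        return hits

    print(find_word("page_capture.png", "algorithm"))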

Automatic Baby Monitor

There is currently a very wide array of baby monitoring tools on the market. Some of these take the form of just a simple camera that points towards the baby and allows the parent to act as the "intelligence" of the system by constantly watching the baby through a remote monitor. Others provide a few extra features such as automatically detecting motion in the room, monitoring for noise, detecting the baby's cries, monitoring room temperature, etc. In all these cases, these features help the parent detect an event that requires attention. This project entails developing a system using a web camera, ideally attached to a light-weight computing device such as a Raspberry Pi, that captures the scene and transmits it to a custom-made Android application with which the parent can tap into the video feed, as well as receive notifications. The application would also include motion detection and other such features.
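
A minimal sketch of the motion-detection feature using OpenCV frame differencing; the alert function and the pixel-count threshold are placeholders for the real push-notification mechanism to the Android app.

    # Sketch of motion detection by differencing consecutive webcam frames.
    import cv2

    def notify_parent(message):
        print("ALERT:", message)        # placeholder for a push notification

    cap = cv2.VideoCapture(0)
    _, prev = cap.read()
    prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    while True:                          # runs until the process is stopped
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        diff = cv2.absdiff(prev, gray)
        moving = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
        if moving > 5000:                # assumed pixel-count threshold
            notify_parent("movement detected in the cot")
        prev = gray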

Automatic Puzzle Solver

As children, we may have all played with puzzles, pouring the pieces onto the table, and then systematically (and sometimes haphazardly) going about figuring out which piece fits where. This project aims to write a system that does the following: with the pieces of a puzzle (starting with a very simple puzzle of very few pieces at first) placed all face up on the table, the system captures a picture of those pieces and then proceeds to figure out which piece fits where to solve the puzzle.

Interactive Sudoku Solver

This project entails (at least in my mind, feel free to mould and mend) creating a game in which a camera is pointed towards a sudoku grid and the computer program "monitors" the game as the user physically puts in numbers and, at the request of the user, can give 'hints' i.e. highlight wrong entries, provide the solution to n squares etc.
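
A minimal sketch of the solving/hint core, independent of the camera side: a standard backtracking solver over a 9x9 grid (0 means an empty cell).

    # Standard backtracking sudoku solver; a 'hint' is the solver's value for
    # one chosen empty square, highlighted on screen by the rest of the system.
    def valid(grid, r, c, v):
        if v in grid[r] or any(grid[i][c] == v for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

    def solve(grid):
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for v in range(1, 10):
                        if valid(grid, r, c, v):
                            grid[r][c] = v
                            if solve(grid):
                                return True
                            grid[r][c] = 0
                    return False
        return True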

Interactive Visual Tic-Tac-Toe

This project is somewhat similar to the interactive sudoku solver. A camera is pointed towards a tic tac toe grid drawn by the user. Thereafter, the user and computer take turns making moves on the grid. The player's choices are drawn physically on the grid, whereas the computer's moves are painted onto the grid virtually by superimposing them on the image on the screen.
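
A minimal sketch of the computer player using minimax on a nine-cell board; reading the user's hand-drawn moves from the camera and painting the computer's moves onto the screen are handled elsewhere.

    # Minimax for tic-tac-toe; the board is a list of 9 cells ('X', 'O' or None).
    WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for a, c, d in WINS:
            if b[a] and b[a] == b[c] == b[d]:
                return b[a]
        return None

    def minimax(b, player):
        w = winner(b)
        if w:
            return (1 if w == "O" else -1), None   # the computer plays 'O'
        if all(b):
            return 0, None                         # draw: board is full
        best = (-2, None) if player == "O" else (2, None)
        for i in range(9):
            if b[i] is None:
                b[i] = player
                score, _ = minimax(b, "X" if player == "O" else "O")
                b[i] = None
                if (player == "O" and score > best[0]) or (player == "X" and score < best[0]):
                    best = (score, i)
        return best

    board = ["X", None, None, None, "O", None, None, None, None]
    print(minimax(board, "O")[1])    # index of the computer's next move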

Visual Plagiarism Detector

The majority of applications aimed at detecting plagiarism dig deep, performing matching at the text and semantic level. Doing so can be very complex and computationally heavy, especially in the case of code plagiarism. This project aims to investigate whether taking a more general 'visual' approach can be just as, if not more, effective. The aim is to determine whether finding matches between a set of documents/code by comparing them as images (rather than text) can help detect similarities and, hence, plagiarism. If shown to be workable, this could be a very helpful tool in undergrad practicals.
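
A minimal sketch of the 'visual' comparison, assuming each submission has first been rasterised to an image (e.g. a PNG of its first page): shrink each to a small grayscale thumbnail and score similarity by pixel difference.

    # Sketch of image-level similarity between two rendered submissions.
    from PIL import Image

    def thumbnail_vector(path, size=(32, 32)):
        img = Image.open(path).convert("L").resize(size)
        return list(img.getdata())

    def similarity(path_a, path_b):
        a, b = thumbnail_vector(path_a), thumbnail_vector(path_b)
        # mean absolute pixel difference, mapped to a 0..1 similarity score
        diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
        return 1 - diff / 255

    print(similarity("submission1.png", "submission2.png"))   # close to 1 = suspicious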

Audio Gesture Recognition

Have you ever placed your ear on the wall and then tried to draw shapes on the wall with your fingernails? The sound made by each type of shape can be very distinctive. This project aims to do something similar. Attach one (or more) microphones to the wall. Then, as the user draws gestures on the wall, the audio gesture recognition program detects and recognizes these gestures. This can be used, for example, as a means of remotely carrying out various actions such as start/stop/pause of a media player, etc.

Leap Motion Gesture Recognition

The Leap Motion is a device that can track a user's hands and determine and provide the pose of the hands in real-time. This project aims to take the information provided by the Leap Motion and train a classifier to recognize various handshapes in various orientations and locations, towards recognizing unique gestures.
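
A minimal sketch of the recognition step: train a scikit-learn classifier on pose feature vectors; the features are assumed to come from the Leap Motion SDK, and the numbers and gesture labels here are stand-ins.

    # Sketch of classifying hand poses from feature vectors (e.g. fingertip
    # positions and palm normal, flattened); values below are stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    X_train = np.array([[0.1, 0.4, 0.9, 0.2],
                        [0.8, 0.1, 0.3, 0.7],
                        [0.2, 0.5, 0.8, 0.1]])
    y_train = ["open_hand", "fist", "open_hand"]       # assumed gesture labels

    clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

    new_pose = np.array([[0.15, 0.45, 0.85, 0.15]])
    print(clf.predict(new_pose)[0])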

************************************************************************************************************************************************************************************

Projects by Mr A Ismail.

Fixed wing drone Project

In this project a team of unmanned aerial vehicles (UAVs) is assigned a task which may include (1) monitoring of traffic, fire, etc., (2) a search and rescue mission, (3) collection of data, or any other suitable application. The objective is for the team of UAVs to perform the task efficiently given a set of constraints.

Two honours students are required for this project.

The project can be divided into two parts:

  1. The objective of the first part of the project is to achieve autonomous flight using fixed-wing UAVs. Various aspects will be explored during experimentation: initially, using GPS coordinates to implement autonomous flight, and then adjusting the flight using image recognition on images of the terrain where the task is performed. Avoiding collisions with other UAVs and objects in the terrain will also be explored.
  2. The second part of the project will primarily deal with the task to be executed by the team of UAVs. The task may involve terrain monitoring, collection of data from sensors distributed in the terrain, a search and rescue mission, or any other suitable application. A strong grasp of networks will be required. Various configurations for collecting and exchanging the data will be explored. The goal is to assign a team of UAVs to perform the task efficiently. (Prof. Bagula is co-supervisor.)

If you are interested in any of the projects above, then feel free to contact me by email. Note, students will be assigned to the project on a "first come, first served" basis.

Languages and/or other Software: Java or Matlab

Genetically (or PSO) programmed checker player

A program based on an evolved neural network was developed to play checkers at human-player level. A description of the program appears in the September 1999 issue of the Proceedings of the IEEE. The evolved neural network is the result of an evolutionary algorithm that taught itself to play checkers using only the positions of the pieces on the board and the piece differential. No other human expertise, in the form of "features" such as mobility, control of the centre, etc., was used. The program evolved to play at a level that is competitive with human experts, as verified in games on the Internet at http://www.zone.com.

The objective of this project is to replicate (and perhaps improve on) the work described above using a computer program evolved with a genetic programming system instead of a neural network.
Languages and/or other Software: any programming language.

An Evolutionary Approach to Cutting Stock Problems (CSPs)

The aim of a CSP is to cut stock material, which can be a reel of wire, paper, a piece of wood, etc., to satisfy customers' demands. The material is referred to as the stock of "(large) objects" and the list of demands as the "(small) items". If there is more than one stock length to be cut to fulfil the requests, the problem is called a multiple stock length CSP. The objective is to minimise the wastage of cutting.
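
A minimal sketch of a simple baseline for the single stock length case, first-fit decreasing, against which an evolutionary approach could be compared; the stock length and demanded item lengths are example values.

    # First-fit decreasing heuristic for the single stock length cutting stock problem.
    def first_fit_decreasing(stock_length, items):
        rolls = []                                   # remaining length per stock piece
        for item in sorted(items, reverse=True):
            for i, remaining in enumerate(rolls):
                if item <= remaining:
                    rolls[i] -= item
                    break
            else:
                rolls.append(stock_length - item)    # open a new stock piece
        waste = sum(rolls)
        return len(rolls), waste

    # example: stock pieces of length 100 and a list of demanded item lengths
    print(first_fit_decreasing(100, [45, 30, 30, 25, 20, 20, 15]))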

************************************************************************************************************************************************************************************

Software Engineering Projects (Dr M Norman).

A Software Configuration Management (SCM) System

Project Description: When building computer software (e.g. prototypes) and when it is released (e.g. version status), changes often occur due to the removal of defects and the addition of new functionality, as examples. These changes in software need to be managed and controlled effectively. Software Configuration Management (SCM) is a set of activities designed to control change by identifying a number of relevant parameters, e.g. managing different versions. This project will implement these identified activities, which will assist software / project managers in controlling and managing change.

Software to support Reviews and Inspections

Project Description: Software Quality Assurance (SQA) encompasses, amongst other aspects, Formal Technical Reviews (FTRs) and Inspections. An FTR is an SQA activity performed by software engineers and others. The FTR has a number of objectives (e.g. uncovering errors, verifying that requirements are met, etc.) and is in essence a class of reviews that includes walkthroughs, inspections, round-robin reviews and other technical assessments. This project will implement these identified activities, which will assist software / project managers in ensuring that software quality is achieved.

Automating aspects / phases of the Software Development Life-cycle (e.g. a DFD editor, Use-case Tool, Class design tool, etc)

Project Description: When you build a software product, you go through a series of predictable steps – a process that helps you create a timely, high-quality work product. This process (with its methods and tools) provides stability, control and organisation to an activity that can become uncontrolled if not implemented in a disciplined manner. These approaches are called Software Process Models or the Software Development Life-Cycle (SDLC). There are a number of phases and aspects to the SDLC, e.g. analysis, design, implementation, testing, etc. This project will implement an aspect of the SDLC which will assist software engineers / developers in the software development process.

************************************************************************************************************************************************************************************

Projects in conjunction with the CSIR (Dr M Norman).

Wi-Fi hacking with a Raspberry Pi and a Drone

In today's world it is easy to create a Wi-Fi access point using multiple devices, such as the cell phone hotspots found on Android, iOS, Blackberry, etc. Raspberry Pis are becoming popular and cheap, with features such as Wi-Fi, Bluetooth and Ethernet, and can be powered from a power bank. The aim of this project is to build a tool on a Raspberry Pi that can automatically scan for and attempt to connect to Wi-Fi networks with weak encryption. The Raspberry Pi will be placed on top of a drone and flown around campus. The project aims to educate the candidate on the importance of strong password encryption, especially for Wi-Fi networks.

  1. The candidate should build a tool on a Raspberry Pi capable of connecting to Wi-Fi networks with a basic Wi-Fi encryption key enabled on them. The connection attempt can use brute force, rainbow tables or another technique of choice. The candidate should also build a Command & Control to store the list of Wi-Fi networks that were successfully connected to using the tool.
  2. Using a phone, the candidate should set up dummy Wi-Fi hotspots with different encryption options and test that the hacking tool works as designed in step 1.
  3. Mount the Raspberry Pi onto a drone and fly it around campus. When a new Wi-Fi network is encountered, the Raspberry Pi should attempt to connect to it (using brute force, a rainbow table or another technique of choice) and, once the connection is successful, send information back to Command & Control.
  4. Document findings and notify the owners of the Wi-Fi networks that were found to have weak encryption keys.
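
A hedged sketch of the scanning-and-reporting portion only, assuming a Linux wireless interface name and a hypothetical Command & Control endpoint; any actual connection attempts must stay within the project's ethics approval.

    # Sketch of listing nearby networks and their encryption state with iwlist
    # and reporting them to a Command & Control server. Interface name and the
    # C&C URL are assumptions; scanning typically requires root.
    import re
    import subprocess
    import requests

    CNC_URL = "http://10.0.0.2:8000/report"       # hypothetical C&C endpoint

    def scan(interface="wlan0"):
        out = subprocess.run(["iwlist", interface, "scan"],
                             capture_output=True, text=True).stdout
        networks = []
        for cell in out.split("Cell ")[1:]:
            ssid = re.search(r'ESSID:"(.*)"', cell)
            enc = re.search(r"Encryption key:(on|off)", cell)
            networks.append({"ssid": ssid.group(1) if ssid else "",
                             "encrypted": enc.group(1) == "on" if enc else None})
        return networks

    for net in scan():
        requests.post(CNC_URL, json=net, timeout=5)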

Free USBs with Malware

The idea of this project is to educate students and staff about cyber security awareness and the ease with which cyber-attacks can be launched.

  1. The candidate will have to build a tracking tool using a platform of their choice and implement a Command & Control machine with a database. The tool should automatically run on any operating system, such as Mac OS X, Windows, Linux, etc.
  2. Load the tool and dummy files & folders onto USB drives. Examples of the files or folders can be "exam scope", "personal information" or "CV". Drop as many USB drives as possible around campus and note down the locations where the USBs were dropped. If possible, take pictures from a distance of people picking up these USBs.
  3. When the victims plug these USB drives into machines, the tracking tool should be able to gather basic information about the victim's machine, such as the operating system, version, last boot, number of hard disk drives, and whether the machine is a public or personal machine. All this must be done as quickly as possible.
  4. Track which files or folders on the USB drive the users clicked on, and how long it took the user to format the USB. Research what other kinds of attacks can be launched this way.
  5. Notify the victims that they were anonymously part of the project and get them to fill in a survey.

Snooping IoT Devices with Raspberry Pi

Raspberry Pis are becoming popular and cheap, with features such as Wi-Fi, Bluetooth and Ethernet. The aim of this project is to have the candidate build an IoT snooping tool on a Raspberry Pi and track how many IoT devices the candidate comes into contact with while walking around campus.

  1. Build an IoT snooping tool on a Raspberry Pi, and build a Command & Control machine which will communicate with the Raspberry Pi and keep track of the information that is found.
  2. Power the Raspberry Pi using a power bank.
  3. Test and confirm that the snooping tool functions as designed.
  4. The candidate should walk around campus, especially in busy areas such as the Student Centre during lunch or lecture halls. The snooping tool should get the name of each device and send the information back to the Command & Control machine, which should keep track of how many distinct devices of a certain type were found, removing duplicates if possible.
  5. Document findings and educate students around campus.

 

********************************************************************************************************************************************************************