About us

Providing consultancy across the Artificial Intelligence field.

Founded in 2008, we have been providing consultancy and development to our partners for over 15 years. Our founder, Christopher Thomas, writes and reviews AI publications, a selection of which is listed below.

Christopher Thomas, who holds an honours degree in AI and Machine Learning, is an innovator who has dedicated over 23 years to advancing technology across various fields. His journey into Artificial Intelligence began more than two decades ago with a cited thesis on image processing and edge detection using Machine Learning, Evolutionary Programming and Genetic Algorithms. This work has been recognised in five academic papers, illustrating Christopher’s early contributions to the field.

As a Professional Member of the Institute of Analysts and Programmers and a technical reviewer for the Springer Nature Group in AI and Deep Learning publications, Christopher has influenced the academic and professional realms of AI. His work in developing efficient Computer Vision Neural Networks for EdgeAI has been particularly notable.

Christopher's expertise has been sought globally. He plays a role in a Global Education Project alongside organisations such as Imagination Technologies, Madrid University, and Peking University, serving as a Consultant and Technical Reviewer and sharing his knowledge of EdgeAI with students around the world.

In 1998, Christopher embarked on a defining project – training his first neural network. This early achievement was a precursor to his later accomplishments, including setting records for model accuracy on the Fashion MNIST dataset.

Christopher’s involvement in AI projects and Generative AI speaks to his versatility and forward-thinking approach. This, combined with his deep understanding of Ethical AI and his accomplishments as a graduate of many fast.ai courses, positions him as a leader in responsible AI development.

As a keen author on both cutting-edge and introductory Artificial Intelligence topics in the Towards Data Science publication and elsewhere, Christopher has had his articles read over half a million times, with one cited in university theses.

Throughout his career Christopher has worked internationally with clients in the automotive, semiconductor, medical, government, communications and marketing sectors.

Christopher brings to every engagement a rich history in AI and a lifelong passion for technology – dating back to his childhood programming days on the Commodore PET and ZX Spectrum in the 1980s and his early use of speech synthesis on AmigaOS.

Latent Diffusion and Perceptual Latent Loss

This article presents a novel approach to training a generative model for image generation with reduced training times, operating on latents and using a pre-trained ImageNet latent classifier as a component of the loss function.

Super Resolution: Adobe Photoshop versus Leading Deep Neural Networks

How effective is Adobe’s Super Resolution compared with the leading super resolution deep neural network models? This article evaluates exactly that, and the results of Adobe’s Super Resolution are very impressive.

Super Convergence with Cyclical Learning Rates in TensorFlow

Super-Convergence using Cyclical Learning Rate schedules is one of the most useful, and most often overlooked, techniques in deep learning. It allows rapid prototyping of network architectures, loss function engineering and data augmentation experiments, and lets production-ready models be trained in orders of magnitude fewer epochs.
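The core of the technique is the cyclical schedule itself. As a minimal sketch (the article applies this within TensorFlow; here the triangular schedule is computed in plain Python, and the function name and parameters are illustrative), the learning rate climbs linearly from a base rate to a maximum and back down each cycle:

```python
import math

def triangular_clr(step, step_size, base_lr, max_lr):
    """Triangular cyclical learning rate.

    The rate rises linearly from base_lr to max_lr over step_size
    steps, then falls back to base_lr, repeating each cycle.
    """
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# One full cycle: rising for 100 steps, falling for 100 steps.
schedule = [triangular_clr(s, 100, 1e-4, 1e-2) for s in range(200)]
```

In practice a schedule like this is passed to the optimiser via a callback or learning-rate schedule object, with the maximum rate chosen using a learning rate range test.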

Deep learning image enhancement insights on loss function engineering

Insights on techniques and loss function engineering for Super Resolution, Colourisation, and style transfer.

Deep learning based super resolution, without using a GAN

This article describes techniques for training a deep learning model for image improvement, image restoration, inpainting and super resolution.

U-Net deep learning colourisation of greyscale images

This article describes experiments training a neural network to generate three-channel colour images from single-channel greyscale images using deep learning. In my opinion the results, whilst they vary by subject matter, are astounding, with the model hallucinating which colours should appear in the original subject matter.

Recurrent Neural Networks and Natural Language Processing

Recurrent Neural Networks (RNNs) are a class of machine learning models ideally suited to sequential data such as text, time series, financial data, speech, audio and video.
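What makes RNNs suited to sequences is the hidden state carried from step to step. As a toy sketch (scalar weights for clarity, where real RNNs use weight matrices; all names here are illustrative), each step combines the current input with the previous state:

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One recurrent step: h_t = tanh(w_x*x_t + w_h*h_prev + b)."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_rnn(sequence, w_x=0.5, w_h=0.8, b=0.0):
    """Process a sequence, carrying the hidden state between steps."""
    h = 0.0
    states = []
    for x_t in sequence:
        h = rnn_step(x_t, h, w_x, w_h, b)
        states.append(h)
    return states

# The first input keeps influencing later states even when the
# subsequent inputs are zero - this is the network's "memory".
states = run_rnn([1.0, 0.0, 0.0])
```

This recurrence is why an RNN's output at any point can depend on everything it has seen so far, rather than only on the current input.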

Loss functions based on feature activation and style loss

Loss functions using these techniques can be used during the training of U-Net based model architectures and could be applied to the training of other Convolutional Neural Networks that generate an image as their prediction/output.
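A style loss typically compares Gram matrices of feature activations, which capture correlations between channels rather than exact pixel positions. As a minimal sketch (in practice the activations come from a pretrained network such as VGG; here plain lists of numbers stand in for feature maps, and all names are illustrative):

```python
def gram_matrix(features):
    """Gram matrix of a feature map given as C lists of N activations:
    G[i][j] = dot(channel_i, channel_j), capturing channel correlations."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def style_loss(pred_features, target_features):
    """Mean squared difference between the two Gram matrices."""
    gp = gram_matrix(pred_features)
    gt = gram_matrix(target_features)
    n = len(gp) * len(gp[0])
    return sum((a - b) ** 2
               for row_p, row_t in zip(gp, gt)
               for a, b in zip(row_p, row_t)) / n

# Two tiny 2-channel "feature maps" of 3 activations each.
loss = style_loss([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]],
                  [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
```

A feature (perceptual) loss works similarly but compares the activations directly instead of their Gram matrices, and the two are often combined as a weighted sum during training.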

U-Nets with ResNet Encoders and cross connections

A U-Net architecture with cross connections similar to a DenseNet

Random forests — a free lunch that’s not cursed

Random forests belong to a group of machine learning techniques called ensembles of decision trees, in which several decision trees are trained on bootstrapped samples of the data ("bagged") and their predictions averaged.

An introduction to Convolutional Neural Networks

Describing what Convolutional Neural Networks are, how they function, how they can be used and why they are so powerful.

Tabular data analysis with deep neural nets

Deep neural networks are now an effective technique for tabular data analysis, requiring little feature engineering and less maintenance than other techniques.