I2C Protocol Subtleties – Part 1

This article is the first in a series describing the more ‘subtle’ aspects of the I2C Protocol, originally developed by Philips.

Since you’re reading this series, I’m assuming you already know what the I2C bus is, and you’re looking to avoid some pain when you need to use it in a project. If so, you’ve come to the right place. If not, I’ll be adding some introductory I2C information soon at my website.

Just so we’re clear, this series will not include coverage of the High-speed mode, as this is substantially different from the design and behavior of the normal 2-wire shared-bus implementation, and is also not that commonly used. There’s plenty of excellent reference material available on the Web that covers this mode.

Here’s a quick list of what will be covered in the rest of the series:

  • missing START
  • missing STOP
  • Repeated START
  • missing data bits
  • missing ACK/NAK
  • data after NAK
  • back-to-back errors
  • pullup resistors
  • bus repeaters
  • implementation using a full-hardware TWI or I2C peripheral
  • implementation using a USI peripheral
  • implementation using a USART peripheral
  • SMBus differences from I2C

Now, on to the good stuff!

For this article, we will focus on the three types of implementation you’ll find in designs today: full hardware, a hardware/software mix, and full software (or ‘bit-bang’, as it is sometimes called).

Many microcontrollers today, even some low-end devices, include a fully-hardware I2C peripheral. Atmel refers to theirs as TWI, Microchip calls theirs I2C; other vendors use similar naming. When using a fully-hardware approach, it is actually difficult to generate any kind of bus error unless you misunderstand how the peripheral works or what a correct I2C bus sequence should look like. In general, though, this approach requires the least in-depth understanding of the protocol itself.
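As a concrete illustration, here is a minimal sketch of a Master write using an AVR-style hardware TWI peripheral. The register and bit names (TWBR, TWCR, TWDR, TWSR) come from the ATmega datasheets; the status-code handling is pared down to the bare minimum, so treat it as an outline rather than a finished driver.

    #include <avr/io.h>

    #define TWI_STATUS (TWSR & 0xF8)    /* mask off the prescaler bits */

    void twi_init(void)
    {
        TWSR = 0;                       /* prescaler = 1                        */
        TWBR = 72;                      /* ~100 kHz SCL from a 16 MHz CPU clock */
        TWCR = (1 << TWEN);             /* enable the TWI peripheral            */
    }

    static void twi_wait(void)
    {
        while (!(TWCR & (1 << TWINT)))  /* hardware sets TWINT when the step completes */
            ;
    }

    /* Write one data byte to the Slave addressed by sla_w (address + W bit). */
    uint8_t twi_write_byte(uint8_t sla_w, uint8_t data)
    {
        TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);   /* send START         */
        twi_wait();
        if (TWI_STATUS != 0x08) return 1;                   /* START not sent     */

        TWDR = sla_w;
        TWCR = (1 << TWINT) | (1 << TWEN);                  /* send SLA+W         */
        twi_wait();
        if (TWI_STATUS != 0x18) return 2;                   /* no ACK from Slave  */

        TWDR = data;
        TWCR = (1 << TWINT) | (1 << TWEN);                  /* send the data byte */
        twi_wait();
        if (TWI_STATUS != 0x28) return 3;                   /* data not ACKed     */

        TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN);   /* send STOP          */
        return 0;
    }

Notice that the hardware generates the START, the address, the data and the STOP for you; all the software does is load registers and check status codes, which is why it is hard to produce a malformed bus sequence this way.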

The USI peripheral found in some Atmel devices is a minimal-hardware design that depends on software interaction to make it a complete implementation. This versatile peripheral can actually be used for I2C, SPI and UART configurations, and is appropriate for low-end devices where adding all three peripherals would be cost-prohibitive. Although it requires more coding than a TWI or full-hardware I2C peripheral, it is in some ways more flexible. This approach requires a more in-depth understanding of the protocol, as you are responsible for moving from one state to the next, and it is possible to go in the wrong direction.
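To give a flavor of that extra involvement, the following sketch clocks one byte through a USI configured for two-wire mode, loosely following the shape of Atmel’s USI application notes. The register and bit names (USIDR, USISR, USICR, USITC, USIOIF) are from the ATtiny datasheets, but the pin assignment (SCL on PB2, as on an ATtiny85), the clock value and the timing are assumptions you should check against your own device.

    #define F_CPU 8000000UL             /* assumed CPU clock for the delays below */
    #include <avr/io.h>
    #include <util/delay.h>

    /* Shift one byte out (or in) over the USI. Assumes the USI has already been
       placed in two-wire mode with SCL/SDA set up as outputs, per the datasheet. */
    uint8_t usi_transfer(uint8_t data)
    {
        /* Two-wire mode, shift on the external SCL edge, clock the 4-bit
           counter from software USITC strobes; writing USITC toggles SCL. */
        const uint8_t strobe = (1 << USIWM1) | (1 << USICS1) |
                               (1 << USICLK) | (1 << USITC);

        USIDR = data;
        USISR = 0xF0;                       /* clear all flags, reset the 4-bit counter */

        do {
            _delay_us(5);
            USICR = strobe;                 /* SCL low -> high                          */
            while (!(PINB & (1 << PB2)))    /* honor any clock stretching by the Slave  */
                ;
            _delay_us(4);
            USICR = strobe;                 /* SCL high -> low, one bit shifted         */
        } while (!(USISR & (1 << USIOIF))); /* 16 edges = 8 bits                        */

        return USIDR;                       /* whatever was shifted in from the bus     */
    }

Even in this small fragment you can see the division of labor: the USI shifts the bits and counts the edges, but the software owns the clock, the timing and the decision about what the next bus state should be.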

Lastly, implementing a 100% software approach demands a full understanding of the I2C protocol. Virtually every microcontroller vendor provides application notes and code examples for creating an I2C Master device using a pure-software solution. Unlike a UART, I2C is a clocked (rather than timed) protocol, so interruptions in the execution of the protocol are tolerated well, allowing interrupts to be serviced without concern for losing data. The maximum speed of a software-based Master is ultimately determined by the CPU clock speed, and such an implementation can usually reach the 400 kHz rate with ease.
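By way of example, here is a compact bit-banged Master fragment. The GPIO hooks (sda_low, sda_release and friends) are hypothetical placeholders for whatever pin-control mechanism your part provides; open-drain behavior is emulated by switching each line between driven-low and released (input, pulled high by the bus resistors).

    #include <stdint.h>

    /* Platform hooks -- hypothetical names, to be filled in for your MCU. */
    void sda_low(void);                 /* drive SDA low                     */
    void sda_release(void);             /* release SDA (input, pulled high)  */
    void scl_low(void);
    void scl_release(void);
    int  sda_read(void);                /* sample SDA                        */
    void i2c_delay(void);               /* roughly half of one SCL period    */

    void i2c_start(void)
    {
        sda_release(); scl_release(); i2c_delay();
        sda_low();     i2c_delay();     /* SDA falls while SCL is high: START */
        scl_low();
    }

    void i2c_stop(void)
    {
        sda_low();     i2c_delay();
        scl_release(); i2c_delay();
        sda_release(); i2c_delay();     /* SDA rises while SCL is high: STOP  */
    }

    /* Send one byte, MSB first; returns 0 if the Slave ACKed. */
    int i2c_write_byte(uint8_t b)
    {
        for (uint8_t mask = 0x80; mask; mask >>= 1) {
            if (b & mask) sda_release(); else sda_low();
            i2c_delay();
            scl_release(); i2c_delay(); /* data must be stable while SCL is high */
            scl_low();
        }
        sda_release();                  /* hand SDA to the Slave for the ACK bit */
        i2c_delay();
        scl_release(); i2c_delay();
        int nak = sda_read();           /* low = ACK, high = NAK                 */
        scl_low();
        return nak;
    }

A production version would also wait for SCL to actually reach a high level after each release, so that a clock-stretching Slave is honored.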

A software-based implementation of a Slave device is much more challenging. Without hardware support, the software must monitor both the SDA and SCL lines simultaneously in order to detect clock edges and positively know the state of the SDA line before each rise or fall of SCL. Detecting a START or STOP condition will usually require interrupts; otherwise the software would need to be 100% consumed with monitoring SCL and SDA. Software-based Slave implementations tend to be CPU-bound, requiring several MIPS to achieve even 100 kHz operation. As a result, true software-only Slave implementations may not even exist for some microcontroller families, and those that do may not be capable of reaching the full 100 kHz bus speed.
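To make the challenge concrete, here is a sketch of the START/STOP detection a software-only Slave needs, assuming a pin-change interrupt can be attached to SDA (the function names and pin-read helpers are hypothetical). A START is SDA falling while SCL is high; a STOP is SDA rising while SCL is high.

    #include <stdint.h>

    extern int read_sda(void);          /* sample the SDA pin */
    extern int read_scl(void);          /* sample the SCL pin */

    volatile uint8_t bus_busy;          /* set between START and STOP */

    /* Called from the pin-change interrupt on SDA. */
    void sda_edge_isr(void)
    {
        if (read_scl()) {               /* SDA changed while SCL was high */
            if (!read_sda()) {
                bus_busy = 1;           /* falling edge: START (or Repeated START)     */
                /* from here the Slave must start clocking in the address byte */
            } else {
                bus_busy = 0;           /* rising edge: STOP, the bus is free          */
            }
        }
        /* SDA edges while SCL is low are ordinary data transitions; ignore them. */
    }

Everything after the START, i.e., sampling each data bit while SCL is high and driving the ACK in time, is where the MIPS get burned, which is why hardware assistance is so valuable on the Slave side.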

With this hardware and software foundation laid, we will dive deeper into the protocol itself in our next article. Thanks for reading!

(Copyright 2010 Robert G. Fries)
